00:00:00.001 Started by upstream project "autotest-per-patch" build number 132776 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.092 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.093 The recommended git tool is: git 00:00:00.093 using credential 00000000-0000-0000-0000-000000000002 00:00:00.094 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.133 Fetching changes from the remote Git repository 00:00:00.138 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.206 Using shallow fetch with depth 1 00:00:00.206 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.206 > git --version # timeout=10 00:00:00.254 > git --version # 'git version 2.39.2' 00:00:00.254 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.287 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.287 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.688 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.700 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.711 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.711 > git config core.sparsecheckout # timeout=10 00:00:04.723 > git read-tree -mu HEAD # timeout=10 00:00:04.739 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.766 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.766 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.878 [Pipeline] Start of Pipeline 00:00:04.893 [Pipeline] library 00:00:04.895 Loading library shm_lib@master 00:00:04.895 Library shm_lib@master is cached. Copying from home. 00:00:04.913 [Pipeline] node 00:48:21.008 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:48:21.010 [Pipeline] { 00:48:21.021 [Pipeline] catchError 00:48:21.023 [Pipeline] { 00:48:21.037 [Pipeline] wrap 00:48:21.045 [Pipeline] { 00:48:21.053 [Pipeline] stage 00:48:21.055 [Pipeline] { (Prologue) 00:48:21.074 [Pipeline] echo 00:48:21.076 Node: VM-host-SM9 00:48:21.083 [Pipeline] cleanWs 00:48:21.096 [WS-CLEANUP] Deleting project workspace... 00:48:21.096 [WS-CLEANUP] Deferred wipeout is used... 
00:48:21.102 [WS-CLEANUP] done 00:48:21.276 [Pipeline] setCustomBuildProperty 00:48:21.362 [Pipeline] httpRequest 00:48:21.770 [Pipeline] echo 00:48:21.772 Sorcerer 10.211.164.101 is alive 00:48:21.781 [Pipeline] retry 00:48:21.783 [Pipeline] { 00:48:21.795 [Pipeline] httpRequest 00:48:21.800 HttpMethod: GET 00:48:21.800 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:48:21.801 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:48:21.802 Response Code: HTTP/1.1 200 OK 00:48:21.802 Success: Status code 200 is in the accepted range: 200,404 00:48:21.803 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:48:21.947 [Pipeline] } 00:48:21.965 [Pipeline] // retry 00:48:21.972 [Pipeline] sh 00:48:22.254 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:48:22.271 [Pipeline] httpRequest 00:48:22.671 [Pipeline] echo 00:48:22.672 Sorcerer 10.211.164.101 is alive 00:48:22.682 [Pipeline] retry 00:48:22.684 [Pipeline] { 00:48:22.698 [Pipeline] httpRequest 00:48:22.702 HttpMethod: GET 00:48:22.703 URL: http://10.211.164.101/packages/spdk_15ce1ba92a7f3803af8b26504042f979d14b95c5.tar.gz 00:48:22.703 Sending request to url: http://10.211.164.101/packages/spdk_15ce1ba92a7f3803af8b26504042f979d14b95c5.tar.gz 00:48:22.704 Response Code: HTTP/1.1 200 OK 00:48:22.705 Success: Status code 200 is in the accepted range: 200,404 00:48:22.705 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_15ce1ba92a7f3803af8b26504042f979d14b95c5.tar.gz 00:48:24.972 [Pipeline] } 00:48:24.986 [Pipeline] // retry 00:48:24.993 [Pipeline] sh 00:48:25.268 + tar --no-same-owner -xf spdk_15ce1ba92a7f3803af8b26504042f979d14b95c5.tar.gz 00:48:27.812 [Pipeline] sh 00:48:28.094 + git -C spdk log --oneline -n5 00:48:28.094 15ce1ba92 lib/reduce: Send unmap to backing dev 00:48:28.094 5f032e8b7 lib/reduce: Write Zero to partial chunk when unmapping the chunks. 
00:48:28.094 a5e6ecf28 lib/reduce: Data copy logic in thin read operations 00:48:28.094 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair 00:48:28.094 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting 00:48:28.116 [Pipeline] writeFile 00:48:28.133 [Pipeline] sh 00:48:28.415 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:48:28.430 [Pipeline] sh 00:48:28.718 + cat autorun-spdk.conf 00:48:28.718 SPDK_RUN_FUNCTIONAL_TEST=1 00:48:28.718 SPDK_TEST_NVMF=1 00:48:28.718 SPDK_TEST_NVMF_TRANSPORT=tcp 00:48:28.718 SPDK_TEST_USDT=1 00:48:28.718 SPDK_TEST_NVMF_MDNS=1 00:48:28.718 SPDK_RUN_UBSAN=1 00:48:28.718 NET_TYPE=virt 00:48:28.718 SPDK_JSONRPC_GO_CLIENT=1 00:48:28.718 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:48:28.725 RUN_NIGHTLY=0 00:48:28.727 [Pipeline] } 00:48:28.743 [Pipeline] // stage 00:48:28.761 [Pipeline] stage 00:48:28.763 [Pipeline] { (Run VM) 00:48:28.778 [Pipeline] sh 00:48:29.095 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:48:29.095 + echo 'Start stage prepare_nvme.sh' 00:48:29.095 Start stage prepare_nvme.sh 00:48:29.095 + [[ -n 2 ]] 00:48:29.095 + disk_prefix=ex2 00:48:29.095 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:48:29.095 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:48:29.095 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:48:29.095 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:48:29.095 ++ SPDK_TEST_NVMF=1 00:48:29.095 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:48:29.095 ++ SPDK_TEST_USDT=1 00:48:29.095 ++ SPDK_TEST_NVMF_MDNS=1 00:48:29.095 ++ SPDK_RUN_UBSAN=1 00:48:29.095 ++ NET_TYPE=virt 00:48:29.095 ++ SPDK_JSONRPC_GO_CLIENT=1 00:48:29.095 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:48:29.095 ++ RUN_NIGHTLY=0 00:48:29.095 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:48:29.095 + nvme_files=() 00:48:29.095 + declare -A nvme_files 00:48:29.095 + backend_dir=/var/lib/libvirt/images/backends 00:48:29.095 + nvme_files['nvme.img']=5G 00:48:29.095 + nvme_files['nvme-cmb.img']=5G 00:48:29.095 + nvme_files['nvme-multi0.img']=4G 00:48:29.095 + nvme_files['nvme-multi1.img']=4G 00:48:29.095 + nvme_files['nvme-multi2.img']=4G 00:48:29.095 + nvme_files['nvme-openstack.img']=8G 00:48:29.095 + nvme_files['nvme-zns.img']=5G 00:48:29.095 + (( SPDK_TEST_NVME_PMR == 1 )) 00:48:29.095 + (( SPDK_TEST_FTL == 1 )) 00:48:29.095 + (( SPDK_TEST_NVME_FDP == 1 )) 00:48:29.095 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:48:29.095 + for nvme in "${!nvme_files[@]}" 00:48:29.095 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:48:29.095 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:48:29.095 + for nvme in "${!nvme_files[@]}" 00:48:29.095 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:48:29.095 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:48:29.095 + for nvme in "${!nvme_files[@]}" 00:48:29.095 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:48:29.095 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:48:29.095 + for nvme in "${!nvme_files[@]}" 00:48:29.095 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:48:29.095 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:48:29.095 + for nvme in "${!nvme_files[@]}" 00:48:29.095 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:48:29.095 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:48:29.095 + for nvme in "${!nvme_files[@]}" 00:48:29.095 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:48:29.095 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:48:29.095 + for nvme in "${!nvme_files[@]}" 00:48:29.095 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:48:29.354 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:48:29.354 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:48:29.354 + echo 'End stage prepare_nvme.sh' 00:48:29.354 End stage prepare_nvme.sh 00:48:29.375 [Pipeline] sh 00:48:29.657 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:48:29.657 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39 00:48:29.657 00:48:29.657 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:48:29.657 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:48:29.657 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:48:29.657 HELP=0 00:48:29.657 DRY_RUN=0 00:48:29.657 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:48:29.657 NVME_DISKS_TYPE=nvme,nvme, 00:48:29.657 NVME_AUTO_CREATE=0 00:48:29.657 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:48:29.657 NVME_CMB=,, 00:48:29.657 NVME_PMR=,, 00:48:29.657 NVME_ZNS=,, 00:48:29.657 NVME_MS=,, 00:48:29.657 NVME_FDP=,, 00:48:29.657 
SPDK_VAGRANT_DISTRO=fedora39 00:48:29.657 SPDK_VAGRANT_VMCPU=10 00:48:29.657 SPDK_VAGRANT_VMRAM=12288 00:48:29.657 SPDK_VAGRANT_PROVIDER=libvirt 00:48:29.657 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:48:29.657 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:48:29.657 SPDK_OPENSTACK_NETWORK=0 00:48:29.657 VAGRANT_PACKAGE_BOX=0 00:48:29.657 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:48:29.657 FORCE_DISTRO=true 00:48:29.657 VAGRANT_BOX_VERSION= 00:48:29.657 EXTRA_VAGRANTFILES= 00:48:29.657 NIC_MODEL=e1000 00:48:29.657 00:48:29.657 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt' 00:48:29.657 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:48:32.192 Bringing machine 'default' up with 'libvirt' provider... 00:48:32.758 ==> default: Creating image (snapshot of base box volume). 00:48:32.758 ==> default: Creating domain with the following settings... 00:48:32.758 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733723246_41b3b485099e99d4be67 00:48:32.758 ==> default: -- Domain type: kvm 00:48:32.758 ==> default: -- Cpus: 10 00:48:32.758 ==> default: -- Feature: acpi 00:48:32.758 ==> default: -- Feature: apic 00:48:32.758 ==> default: -- Feature: pae 00:48:32.758 ==> default: -- Memory: 12288M 00:48:32.758 ==> default: -- Memory Backing: hugepages: 00:48:32.758 ==> default: -- Management MAC: 00:48:32.758 ==> default: -- Loader: 00:48:32.758 ==> default: -- Nvram: 00:48:32.758 ==> default: -- Base box: spdk/fedora39 00:48:32.758 ==> default: -- Storage pool: default 00:48:32.758 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733723246_41b3b485099e99d4be67.img (20G) 00:48:32.758 ==> default: -- Volume Cache: default 00:48:32.758 ==> default: -- Kernel: 00:48:32.758 ==> default: -- Initrd: 00:48:32.758 ==> default: -- Graphics Type: vnc 00:48:32.758 ==> default: -- Graphics Port: -1 00:48:32.758 ==> default: -- Graphics IP: 127.0.0.1 00:48:32.758 ==> default: -- Graphics Password: Not defined 00:48:32.758 ==> default: -- Video Type: cirrus 00:48:32.758 ==> default: -- Video VRAM: 9216 00:48:32.758 ==> default: -- Sound Type: 00:48:32.758 ==> default: -- Keymap: en-us 00:48:32.758 ==> default: -- TPM Path: 00:48:32.758 ==> default: -- INPUT: type=mouse, bus=ps2 00:48:32.758 ==> default: -- Command line args: 00:48:32.758 ==> default: -> value=-device, 00:48:32.758 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:48:32.758 ==> default: -> value=-drive, 00:48:32.758 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:48:32.758 ==> default: -> value=-device, 00:48:32.758 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:48:32.758 ==> default: -> value=-device, 00:48:32.758 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:48:32.758 ==> default: -> value=-drive, 00:48:32.758 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:48:32.758 ==> default: -> value=-device, 00:48:32.758 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:48:32.758 ==> default: -> value=-drive, 00:48:32.758 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:48:32.758 ==> default: -> value=-device, 00:48:32.758 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:48:32.758 ==> default: -> value=-drive, 00:48:32.758 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:48:32.758 ==> default: -> value=-device, 00:48:32.758 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:48:32.758 ==> default: Creating shared folders metadata... 00:48:32.758 ==> default: Starting domain. 00:48:34.135 ==> default: Waiting for domain to get an IP address... 00:48:52.223 ==> default: Waiting for SSH to become available... 00:48:52.223 ==> default: Configuring and enabling network interfaces... 00:48:54.767 default: SSH address: 192.168.121.23:22 00:48:54.767 default: SSH username: vagrant 00:48:54.767 default: SSH auth method: private key 00:48:57.315 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:49:05.439 ==> default: Mounting SSHFS shared folder... 00:49:06.007 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:49:06.007 ==> default: Checking Mount.. 00:49:07.380 ==> default: Folder Successfully Mounted! 00:49:07.380 ==> default: Running provisioner: file... 00:49:08.314 default: ~/.gitconfig => .gitconfig 00:49:08.880 00:49:08.880 SUCCESS! 00:49:08.880 00:49:08.880 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:49:08.880 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:49:08.880 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:49:08.880 00:49:08.888 [Pipeline] } 00:49:08.898 [Pipeline] // stage 00:49:08.905 [Pipeline] dir 00:49:08.906 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt 00:49:08.907 [Pipeline] { 00:49:08.916 [Pipeline] catchError 00:49:08.917 [Pipeline] { 00:49:08.926 [Pipeline] sh 00:49:09.199 + vagrant ssh-config --host vagrant 00:49:09.199 + sed -ne /^Host/,$p 00:49:09.199 + tee ssh_conf 00:49:11.731 Host vagrant 00:49:11.731 HostName 192.168.121.23 00:49:11.731 User vagrant 00:49:11.731 Port 22 00:49:11.731 UserKnownHostsFile /dev/null 00:49:11.731 StrictHostKeyChecking no 00:49:11.731 PasswordAuthentication no 00:49:11.731 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:49:11.731 IdentitiesOnly yes 00:49:11.731 LogLevel FATAL 00:49:11.731 ForwardAgent yes 00:49:11.731 ForwardX11 yes 00:49:11.731 00:49:11.745 [Pipeline] withEnv 00:49:11.747 [Pipeline] { 00:49:11.760 [Pipeline] sh 00:49:12.039 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:49:12.040 source /etc/os-release 00:49:12.040 [[ -e /image.version ]] && img=$(< /image.version) 00:49:12.040 # Minimal, systemd-like check. 
00:49:12.040 if [[ -e /.dockerenv ]]; then 00:49:12.040 # Clear garbage from the node's name: 00:49:12.040 # agt-er_autotest_547-896 -> autotest_547-896 00:49:12.040 # $HOSTNAME is the actual container id 00:49:12.040 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:49:12.040 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:49:12.040 # We can assume this is a mount from a host where container is running, 00:49:12.040 # so fetch its hostname to easily identify the target swarm worker. 00:49:12.040 container="$(< /etc/hostname) ($agent)" 00:49:12.040 else 00:49:12.040 # Fallback 00:49:12.040 container=$agent 00:49:12.040 fi 00:49:12.040 fi 00:49:12.040 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:49:12.040 00:49:12.310 [Pipeline] } 00:49:12.328 [Pipeline] // withEnv 00:49:12.337 [Pipeline] setCustomBuildProperty 00:49:12.354 [Pipeline] stage 00:49:12.356 [Pipeline] { (Tests) 00:49:12.374 [Pipeline] sh 00:49:12.654 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:49:12.928 [Pipeline] sh 00:49:13.211 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:49:13.486 [Pipeline] timeout 00:49:13.486 Timeout set to expire in 1 hr 0 min 00:49:13.488 [Pipeline] { 00:49:13.503 [Pipeline] sh 00:49:13.784 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:49:14.352 HEAD is now at 15ce1ba92 lib/reduce: Send unmap to backing dev 00:49:14.365 [Pipeline] sh 00:49:14.644 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:49:14.918 [Pipeline] sh 00:49:15.198 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:49:15.474 [Pipeline] sh 00:49:15.775 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:49:16.064 ++ readlink -f spdk_repo 00:49:16.064 + DIR_ROOT=/home/vagrant/spdk_repo 00:49:16.064 + [[ -n /home/vagrant/spdk_repo ]] 00:49:16.064 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:49:16.064 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:49:16.064 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:49:16.064 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:49:16.064 + [[ -d /home/vagrant/spdk_repo/output ]] 00:49:16.064 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:49:16.064 + cd /home/vagrant/spdk_repo 00:49:16.064 + source /etc/os-release 00:49:16.064 ++ NAME='Fedora Linux' 00:49:16.064 ++ VERSION='39 (Cloud Edition)' 00:49:16.064 ++ ID=fedora 00:49:16.064 ++ VERSION_ID=39 00:49:16.064 ++ VERSION_CODENAME= 00:49:16.064 ++ PLATFORM_ID=platform:f39 00:49:16.064 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:49:16.064 ++ ANSI_COLOR='0;38;2;60;110;180' 00:49:16.064 ++ LOGO=fedora-logo-icon 00:49:16.064 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:49:16.064 ++ HOME_URL=https://fedoraproject.org/ 00:49:16.064 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:49:16.064 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:49:16.064 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:49:16.064 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:49:16.064 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:49:16.064 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:49:16.065 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:49:16.065 ++ SUPPORT_END=2024-11-12 00:49:16.065 ++ VARIANT='Cloud Edition' 00:49:16.065 ++ VARIANT_ID=cloud 00:49:16.065 + uname -a 00:49:16.065 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:49:16.065 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:49:16.332 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:49:16.332 Hugepages 00:49:16.332 node hugesize free / total 00:49:16.332 node0 1048576kB 0 / 0 00:49:16.332 node0 2048kB 0 / 0 00:49:16.332 00:49:16.332 Type BDF Vendor Device NUMA Driver Device Block devices 00:49:16.332 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:49:16.592 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:49:16.592 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:49:16.592 + rm -f /tmp/spdk-ld-path 00:49:16.592 + source autorun-spdk.conf 00:49:16.592 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:49:16.592 ++ SPDK_TEST_NVMF=1 00:49:16.592 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:49:16.592 ++ SPDK_TEST_USDT=1 00:49:16.592 ++ SPDK_TEST_NVMF_MDNS=1 00:49:16.592 ++ SPDK_RUN_UBSAN=1 00:49:16.592 ++ NET_TYPE=virt 00:49:16.592 ++ SPDK_JSONRPC_GO_CLIENT=1 00:49:16.592 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:49:16.592 ++ RUN_NIGHTLY=0 00:49:16.592 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:49:16.592 + [[ -n '' ]] 00:49:16.592 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:49:16.592 + for M in /var/spdk/build-*-manifest.txt 00:49:16.592 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:49:16.592 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:49:16.592 + for M in /var/spdk/build-*-manifest.txt 00:49:16.592 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:49:16.592 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:49:16.592 + for M in /var/spdk/build-*-manifest.txt 00:49:16.592 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:49:16.592 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:49:16.592 ++ uname 00:49:16.592 + [[ Linux == \L\i\n\u\x ]] 00:49:16.592 + sudo dmesg -T 00:49:16.592 + sudo dmesg --clear 00:49:16.592 + dmesg_pid=5260 00:49:16.592 + [[ Fedora Linux == FreeBSD ]] 00:49:16.592 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:49:16.592 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:49:16.592 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:49:16.592 + [[ -x /usr/src/fio-static/fio ]] 00:49:16.592 + sudo dmesg -Tw 00:49:16.592 + export FIO_BIN=/usr/src/fio-static/fio 00:49:16.592 + FIO_BIN=/usr/src/fio-static/fio 00:49:16.592 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:49:16.592 + [[ ! -v VFIO_QEMU_BIN ]] 00:49:16.592 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:49:16.592 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:49:16.592 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:49:16.592 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:49:16.592 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:49:16.592 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:49:16.592 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:49:16.592 05:48:11 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:49:16.592 05:48:11 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:49:16.592 05:48:11 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:49:16.592 05:48:11 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:49:16.592 05:48:11 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:49:16.592 05:48:11 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_USDT=1 00:49:16.592 05:48:11 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_MDNS=1 00:49:16.592 05:48:11 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:49:16.592 05:48:11 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:49:16.592 05:48:11 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_JSONRPC_GO_CLIENT=1 00:49:16.592 05:48:11 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:49:16.592 05:48:11 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:49:16.592 05:48:11 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:49:16.592 05:48:11 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:49:16.852 05:48:11 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:49:16.852 05:48:11 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:49:16.852 05:48:11 -- scripts/common.sh@15 -- $ shopt -s extglob 00:49:16.852 05:48:11 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:49:16.852 05:48:11 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:16.852 05:48:11 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:16.852 05:48:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:16.852 05:48:11 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:16.852 05:48:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:16.852 05:48:11 -- paths/export.sh@5 -- $ export PATH 00:49:16.852 05:48:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:16.852 05:48:11 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:49:16.852 05:48:11 -- common/autobuild_common.sh@493 -- $ date +%s 00:49:16.852 05:48:11 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733723291.XXXXXX 00:49:16.852 05:48:11 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733723291.H13dsR 00:49:16.852 05:48:11 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:49:16.852 05:48:11 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:49:16.852 05:48:11 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:49:16.852 05:48:11 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:49:16.852 05:48:11 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:49:16.852 05:48:11 -- common/autobuild_common.sh@509 -- $ get_config_params 00:49:16.852 05:48:11 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:49:16.852 05:48:11 -- common/autotest_common.sh@10 -- $ set +x 00:49:16.852 05:48:11 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:49:16.852 05:48:11 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:49:16.852 05:48:11 -- pm/common@17 -- $ local monitor 00:49:16.852 05:48:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:49:16.852 05:48:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:49:16.852 05:48:11 -- pm/common@25 -- $ sleep 1 00:49:16.852 05:48:11 -- pm/common@21 -- $ date +%s 00:49:16.852 05:48:11 -- pm/common@21 -- $ date +%s 00:49:16.852 05:48:11 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733723291 00:49:16.852 05:48:11 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733723291 00:49:16.852 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733723291_collect-cpu-load.pm.log 00:49:16.852 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733723291_collect-vmstat.pm.log 00:49:17.790 05:48:12 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:49:17.790 05:48:12 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:49:17.790 05:48:12 -- spdk/autobuild.sh@12 -- $ umask 022 00:49:17.790 05:48:12 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:49:17.790 05:48:12 -- spdk/autobuild.sh@16 -- $ date -u 00:49:17.790 Mon Dec 9 05:48:12 AM UTC 2024 00:49:17.790 05:48:12 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:49:17.790 v25.01-pre-305-g15ce1ba92 00:49:17.790 05:48:12 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:49:17.790 05:48:12 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:49:17.790 05:48:12 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:49:17.790 05:48:12 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:49:17.790 05:48:12 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:49:17.790 05:48:12 -- common/autotest_common.sh@10 -- $ set +x 00:49:17.790 ************************************ 00:49:17.790 START TEST ubsan 00:49:17.790 ************************************ 00:49:17.790 using ubsan 00:49:17.790 05:48:12 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:49:17.790 00:49:17.790 real 0m0.000s 00:49:17.790 user 0m0.000s 00:49:17.790 sys 0m0.000s 00:49:17.790 05:48:12 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:49:17.790 ************************************ 00:49:17.790 END TEST ubsan 00:49:17.790 ************************************ 00:49:17.790 05:48:12 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:49:17.790 05:48:12 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:49:17.790 05:48:12 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:49:17.790 05:48:12 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:49:17.790 05:48:12 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:49:17.790 05:48:12 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:49:17.790 05:48:12 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:49:17.790 05:48:12 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:49:17.790 05:48:12 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:49:17.790 05:48:12 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:49:18.049 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:49:18.049 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:49:18.614 Using 'verbs' RDMA provider 00:49:34.056 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:49:46.267 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:49:46.267 go version go1.21.1 linux/amd64 00:49:46.267 Creating mk/config.mk...done. 00:49:46.267 Creating mk/cc.flags.mk...done. 
00:49:46.267 Type 'make' to build. 00:49:46.267 05:48:40 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:49:46.267 05:48:40 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:49:46.267 05:48:40 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:49:46.267 05:48:40 -- common/autotest_common.sh@10 -- $ set +x 00:49:46.267 ************************************ 00:49:46.267 START TEST make 00:49:46.267 ************************************ 00:49:46.267 05:48:40 make -- common/autotest_common.sh@1129 -- $ make -j10 00:49:46.525 make[1]: Nothing to be done for 'all'. 00:50:01.395 The Meson build system 00:50:01.395 Version: 1.5.0 00:50:01.395 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:50:01.395 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:50:01.395 Build type: native build 00:50:01.395 Program cat found: YES (/usr/bin/cat) 00:50:01.395 Project name: DPDK 00:50:01.395 Project version: 24.03.0 00:50:01.395 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:50:01.395 C linker for the host machine: cc ld.bfd 2.40-14 00:50:01.395 Host machine cpu family: x86_64 00:50:01.395 Host machine cpu: x86_64 00:50:01.395 Message: ## Building in Developer Mode ## 00:50:01.395 Program pkg-config found: YES (/usr/bin/pkg-config) 00:50:01.395 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:50:01.395 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:50:01.395 Program python3 found: YES (/usr/bin/python3) 00:50:01.395 Program cat found: YES (/usr/bin/cat) 00:50:01.395 Compiler for C supports arguments -march=native: YES 00:50:01.395 Checking for size of "void *" : 8 00:50:01.395 Checking for size of "void *" : 8 (cached) 00:50:01.395 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:50:01.395 Library m found: YES 00:50:01.395 Library numa found: YES 00:50:01.395 Has header "numaif.h" : YES 00:50:01.395 Library fdt found: NO 00:50:01.395 Library execinfo found: NO 00:50:01.395 Has header "execinfo.h" : YES 00:50:01.395 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:50:01.395 Run-time dependency libarchive found: NO (tried pkgconfig) 00:50:01.395 Run-time dependency libbsd found: NO (tried pkgconfig) 00:50:01.395 Run-time dependency jansson found: NO (tried pkgconfig) 00:50:01.395 Run-time dependency openssl found: YES 3.1.1 00:50:01.395 Run-time dependency libpcap found: YES 1.10.4 00:50:01.395 Has header "pcap.h" with dependency libpcap: YES 00:50:01.395 Compiler for C supports arguments -Wcast-qual: YES 00:50:01.395 Compiler for C supports arguments -Wdeprecated: YES 00:50:01.395 Compiler for C supports arguments -Wformat: YES 00:50:01.395 Compiler for C supports arguments -Wformat-nonliteral: NO 00:50:01.395 Compiler for C supports arguments -Wformat-security: NO 00:50:01.395 Compiler for C supports arguments -Wmissing-declarations: YES 00:50:01.395 Compiler for C supports arguments -Wmissing-prototypes: YES 00:50:01.395 Compiler for C supports arguments -Wnested-externs: YES 00:50:01.395 Compiler for C supports arguments -Wold-style-definition: YES 00:50:01.395 Compiler for C supports arguments -Wpointer-arith: YES 00:50:01.395 Compiler for C supports arguments -Wsign-compare: YES 00:50:01.395 Compiler for C supports arguments -Wstrict-prototypes: YES 00:50:01.395 Compiler for C supports arguments -Wundef: YES 00:50:01.395 Compiler for C supports arguments -Wwrite-strings: YES 
00:50:01.395 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:50:01.395 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:50:01.395 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:50:01.395 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:50:01.395 Program objdump found: YES (/usr/bin/objdump) 00:50:01.395 Compiler for C supports arguments -mavx512f: YES 00:50:01.395 Checking if "AVX512 checking" compiles: YES 00:50:01.395 Fetching value of define "__SSE4_2__" : 1 00:50:01.395 Fetching value of define "__AES__" : 1 00:50:01.395 Fetching value of define "__AVX__" : 1 00:50:01.395 Fetching value of define "__AVX2__" : 1 00:50:01.395 Fetching value of define "__AVX512BW__" : (undefined) 00:50:01.395 Fetching value of define "__AVX512CD__" : (undefined) 00:50:01.395 Fetching value of define "__AVX512DQ__" : (undefined) 00:50:01.395 Fetching value of define "__AVX512F__" : (undefined) 00:50:01.395 Fetching value of define "__AVX512VL__" : (undefined) 00:50:01.395 Fetching value of define "__PCLMUL__" : 1 00:50:01.396 Fetching value of define "__RDRND__" : 1 00:50:01.396 Fetching value of define "__RDSEED__" : 1 00:50:01.396 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:50:01.396 Fetching value of define "__znver1__" : (undefined) 00:50:01.396 Fetching value of define "__znver2__" : (undefined) 00:50:01.396 Fetching value of define "__znver3__" : (undefined) 00:50:01.396 Fetching value of define "__znver4__" : (undefined) 00:50:01.396 Compiler for C supports arguments -Wno-format-truncation: YES 00:50:01.396 Message: lib/log: Defining dependency "log" 00:50:01.396 Message: lib/kvargs: Defining dependency "kvargs" 00:50:01.396 Message: lib/telemetry: Defining dependency "telemetry" 00:50:01.396 Checking for function "getentropy" : NO 00:50:01.396 Message: lib/eal: Defining dependency "eal" 00:50:01.396 Message: lib/ring: Defining dependency "ring" 00:50:01.396 Message: lib/rcu: Defining dependency "rcu" 00:50:01.396 Message: lib/mempool: Defining dependency "mempool" 00:50:01.396 Message: lib/mbuf: Defining dependency "mbuf" 00:50:01.396 Fetching value of define "__PCLMUL__" : 1 (cached) 00:50:01.396 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:50:01.396 Compiler for C supports arguments -mpclmul: YES 00:50:01.396 Compiler for C supports arguments -maes: YES 00:50:01.396 Compiler for C supports arguments -mavx512f: YES (cached) 00:50:01.396 Compiler for C supports arguments -mavx512bw: YES 00:50:01.396 Compiler for C supports arguments -mavx512dq: YES 00:50:01.396 Compiler for C supports arguments -mavx512vl: YES 00:50:01.396 Compiler for C supports arguments -mvpclmulqdq: YES 00:50:01.396 Compiler for C supports arguments -mavx2: YES 00:50:01.396 Compiler for C supports arguments -mavx: YES 00:50:01.396 Message: lib/net: Defining dependency "net" 00:50:01.396 Message: lib/meter: Defining dependency "meter" 00:50:01.396 Message: lib/ethdev: Defining dependency "ethdev" 00:50:01.396 Message: lib/pci: Defining dependency "pci" 00:50:01.396 Message: lib/cmdline: Defining dependency "cmdline" 00:50:01.396 Message: lib/hash: Defining dependency "hash" 00:50:01.396 Message: lib/timer: Defining dependency "timer" 00:50:01.396 Message: lib/compressdev: Defining dependency "compressdev" 00:50:01.396 Message: lib/cryptodev: Defining dependency "cryptodev" 00:50:01.396 Message: lib/dmadev: Defining dependency "dmadev" 00:50:01.396 Compiler for C supports arguments -Wno-cast-qual: YES 
00:50:01.396 Message: lib/power: Defining dependency "power" 00:50:01.396 Message: lib/reorder: Defining dependency "reorder" 00:50:01.396 Message: lib/security: Defining dependency "security" 00:50:01.396 Has header "linux/userfaultfd.h" : YES 00:50:01.396 Has header "linux/vduse.h" : YES 00:50:01.396 Message: lib/vhost: Defining dependency "vhost" 00:50:01.396 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:50:01.396 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:50:01.396 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:50:01.396 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:50:01.396 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:50:01.396 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:50:01.396 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:50:01.396 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:50:01.396 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:50:01.396 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:50:01.396 Program doxygen found: YES (/usr/local/bin/doxygen) 00:50:01.396 Configuring doxy-api-html.conf using configuration 00:50:01.396 Configuring doxy-api-man.conf using configuration 00:50:01.396 Program mandb found: YES (/usr/bin/mandb) 00:50:01.396 Program sphinx-build found: NO 00:50:01.396 Configuring rte_build_config.h using configuration 00:50:01.396 Message: 00:50:01.396 ================= 00:50:01.396 Applications Enabled 00:50:01.396 ================= 00:50:01.396 00:50:01.396 apps: 00:50:01.396 00:50:01.396 00:50:01.396 Message: 00:50:01.396 ================= 00:50:01.396 Libraries Enabled 00:50:01.396 ================= 00:50:01.396 00:50:01.396 libs: 00:50:01.396 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:50:01.396 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:50:01.396 cryptodev, dmadev, power, reorder, security, vhost, 00:50:01.396 00:50:01.396 Message: 00:50:01.396 =============== 00:50:01.396 Drivers Enabled 00:50:01.396 =============== 00:50:01.396 00:50:01.396 common: 00:50:01.396 00:50:01.396 bus: 00:50:01.396 pci, vdev, 00:50:01.396 mempool: 00:50:01.396 ring, 00:50:01.396 dma: 00:50:01.396 00:50:01.396 net: 00:50:01.396 00:50:01.396 crypto: 00:50:01.396 00:50:01.396 compress: 00:50:01.396 00:50:01.396 vdpa: 00:50:01.396 00:50:01.396 00:50:01.396 Message: 00:50:01.396 ================= 00:50:01.396 Content Skipped 00:50:01.396 ================= 00:50:01.396 00:50:01.396 apps: 00:50:01.396 dumpcap: explicitly disabled via build config 00:50:01.396 graph: explicitly disabled via build config 00:50:01.396 pdump: explicitly disabled via build config 00:50:01.396 proc-info: explicitly disabled via build config 00:50:01.396 test-acl: explicitly disabled via build config 00:50:01.396 test-bbdev: explicitly disabled via build config 00:50:01.396 test-cmdline: explicitly disabled via build config 00:50:01.396 test-compress-perf: explicitly disabled via build config 00:50:01.396 test-crypto-perf: explicitly disabled via build config 00:50:01.396 test-dma-perf: explicitly disabled via build config 00:50:01.396 test-eventdev: explicitly disabled via build config 00:50:01.396 test-fib: explicitly disabled via build config 00:50:01.396 test-flow-perf: explicitly disabled via build config 00:50:01.396 test-gpudev: explicitly disabled via build config 00:50:01.396 test-mldev: explicitly disabled via 
build config 00:50:01.396 test-pipeline: explicitly disabled via build config 00:50:01.396 test-pmd: explicitly disabled via build config 00:50:01.396 test-regex: explicitly disabled via build config 00:50:01.396 test-sad: explicitly disabled via build config 00:50:01.396 test-security-perf: explicitly disabled via build config 00:50:01.396 00:50:01.396 libs: 00:50:01.396 argparse: explicitly disabled via build config 00:50:01.396 metrics: explicitly disabled via build config 00:50:01.396 acl: explicitly disabled via build config 00:50:01.396 bbdev: explicitly disabled via build config 00:50:01.396 bitratestats: explicitly disabled via build config 00:50:01.396 bpf: explicitly disabled via build config 00:50:01.396 cfgfile: explicitly disabled via build config 00:50:01.396 distributor: explicitly disabled via build config 00:50:01.396 efd: explicitly disabled via build config 00:50:01.396 eventdev: explicitly disabled via build config 00:50:01.396 dispatcher: explicitly disabled via build config 00:50:01.396 gpudev: explicitly disabled via build config 00:50:01.396 gro: explicitly disabled via build config 00:50:01.396 gso: explicitly disabled via build config 00:50:01.396 ip_frag: explicitly disabled via build config 00:50:01.396 jobstats: explicitly disabled via build config 00:50:01.396 latencystats: explicitly disabled via build config 00:50:01.396 lpm: explicitly disabled via build config 00:50:01.396 member: explicitly disabled via build config 00:50:01.396 pcapng: explicitly disabled via build config 00:50:01.396 rawdev: explicitly disabled via build config 00:50:01.396 regexdev: explicitly disabled via build config 00:50:01.396 mldev: explicitly disabled via build config 00:50:01.396 rib: explicitly disabled via build config 00:50:01.396 sched: explicitly disabled via build config 00:50:01.396 stack: explicitly disabled via build config 00:50:01.396 ipsec: explicitly disabled via build config 00:50:01.396 pdcp: explicitly disabled via build config 00:50:01.396 fib: explicitly disabled via build config 00:50:01.396 port: explicitly disabled via build config 00:50:01.396 pdump: explicitly disabled via build config 00:50:01.396 table: explicitly disabled via build config 00:50:01.396 pipeline: explicitly disabled via build config 00:50:01.396 graph: explicitly disabled via build config 00:50:01.396 node: explicitly disabled via build config 00:50:01.396 00:50:01.396 drivers: 00:50:01.396 common/cpt: not in enabled drivers build config 00:50:01.396 common/dpaax: not in enabled drivers build config 00:50:01.396 common/iavf: not in enabled drivers build config 00:50:01.396 common/idpf: not in enabled drivers build config 00:50:01.396 common/ionic: not in enabled drivers build config 00:50:01.396 common/mvep: not in enabled drivers build config 00:50:01.396 common/octeontx: not in enabled drivers build config 00:50:01.396 bus/auxiliary: not in enabled drivers build config 00:50:01.396 bus/cdx: not in enabled drivers build config 00:50:01.396 bus/dpaa: not in enabled drivers build config 00:50:01.396 bus/fslmc: not in enabled drivers build config 00:50:01.396 bus/ifpga: not in enabled drivers build config 00:50:01.396 bus/platform: not in enabled drivers build config 00:50:01.396 bus/uacce: not in enabled drivers build config 00:50:01.396 bus/vmbus: not in enabled drivers build config 00:50:01.396 common/cnxk: not in enabled drivers build config 00:50:01.396 common/mlx5: not in enabled drivers build config 00:50:01.396 common/nfp: not in enabled drivers build config 00:50:01.396 
common/nitrox: not in enabled drivers build config 00:50:01.396 common/qat: not in enabled drivers build config 00:50:01.396 common/sfc_efx: not in enabled drivers build config 00:50:01.396 mempool/bucket: not in enabled drivers build config 00:50:01.396 mempool/cnxk: not in enabled drivers build config 00:50:01.396 mempool/dpaa: not in enabled drivers build config 00:50:01.396 mempool/dpaa2: not in enabled drivers build config 00:50:01.396 mempool/octeontx: not in enabled drivers build config 00:50:01.396 mempool/stack: not in enabled drivers build config 00:50:01.396 dma/cnxk: not in enabled drivers build config 00:50:01.396 dma/dpaa: not in enabled drivers build config 00:50:01.396 dma/dpaa2: not in enabled drivers build config 00:50:01.396 dma/hisilicon: not in enabled drivers build config 00:50:01.396 dma/idxd: not in enabled drivers build config 00:50:01.396 dma/ioat: not in enabled drivers build config 00:50:01.396 dma/skeleton: not in enabled drivers build config 00:50:01.396 net/af_packet: not in enabled drivers build config 00:50:01.396 net/af_xdp: not in enabled drivers build config 00:50:01.396 net/ark: not in enabled drivers build config 00:50:01.396 net/atlantic: not in enabled drivers build config 00:50:01.396 net/avp: not in enabled drivers build config 00:50:01.396 net/axgbe: not in enabled drivers build config 00:50:01.396 net/bnx2x: not in enabled drivers build config 00:50:01.396 net/bnxt: not in enabled drivers build config 00:50:01.396 net/bonding: not in enabled drivers build config 00:50:01.396 net/cnxk: not in enabled drivers build config 00:50:01.396 net/cpfl: not in enabled drivers build config 00:50:01.396 net/cxgbe: not in enabled drivers build config 00:50:01.396 net/dpaa: not in enabled drivers build config 00:50:01.396 net/dpaa2: not in enabled drivers build config 00:50:01.396 net/e1000: not in enabled drivers build config 00:50:01.396 net/ena: not in enabled drivers build config 00:50:01.396 net/enetc: not in enabled drivers build config 00:50:01.396 net/enetfec: not in enabled drivers build config 00:50:01.396 net/enic: not in enabled drivers build config 00:50:01.396 net/failsafe: not in enabled drivers build config 00:50:01.396 net/fm10k: not in enabled drivers build config 00:50:01.396 net/gve: not in enabled drivers build config 00:50:01.396 net/hinic: not in enabled drivers build config 00:50:01.396 net/hns3: not in enabled drivers build config 00:50:01.396 net/i40e: not in enabled drivers build config 00:50:01.396 net/iavf: not in enabled drivers build config 00:50:01.396 net/ice: not in enabled drivers build config 00:50:01.396 net/idpf: not in enabled drivers build config 00:50:01.396 net/igc: not in enabled drivers build config 00:50:01.396 net/ionic: not in enabled drivers build config 00:50:01.396 net/ipn3ke: not in enabled drivers build config 00:50:01.396 net/ixgbe: not in enabled drivers build config 00:50:01.396 net/mana: not in enabled drivers build config 00:50:01.396 net/memif: not in enabled drivers build config 00:50:01.396 net/mlx4: not in enabled drivers build config 00:50:01.396 net/mlx5: not in enabled drivers build config 00:50:01.396 net/mvneta: not in enabled drivers build config 00:50:01.396 net/mvpp2: not in enabled drivers build config 00:50:01.396 net/netvsc: not in enabled drivers build config 00:50:01.396 net/nfb: not in enabled drivers build config 00:50:01.396 net/nfp: not in enabled drivers build config 00:50:01.396 net/ngbe: not in enabled drivers build config 00:50:01.396 net/null: not in enabled drivers build config 
00:50:01.396 net/octeontx: not in enabled drivers build config 00:50:01.396 net/octeon_ep: not in enabled drivers build config 00:50:01.396 net/pcap: not in enabled drivers build config 00:50:01.396 net/pfe: not in enabled drivers build config 00:50:01.396 net/qede: not in enabled drivers build config 00:50:01.396 net/ring: not in enabled drivers build config 00:50:01.396 net/sfc: not in enabled drivers build config 00:50:01.396 net/softnic: not in enabled drivers build config 00:50:01.396 net/tap: not in enabled drivers build config 00:50:01.396 net/thunderx: not in enabled drivers build config 00:50:01.396 net/txgbe: not in enabled drivers build config 00:50:01.396 net/vdev_netvsc: not in enabled drivers build config 00:50:01.396 net/vhost: not in enabled drivers build config 00:50:01.396 net/virtio: not in enabled drivers build config 00:50:01.396 net/vmxnet3: not in enabled drivers build config 00:50:01.396 raw/*: missing internal dependency, "rawdev" 00:50:01.396 crypto/armv8: not in enabled drivers build config 00:50:01.396 crypto/bcmfs: not in enabled drivers build config 00:50:01.396 crypto/caam_jr: not in enabled drivers build config 00:50:01.396 crypto/ccp: not in enabled drivers build config 00:50:01.396 crypto/cnxk: not in enabled drivers build config 00:50:01.396 crypto/dpaa_sec: not in enabled drivers build config 00:50:01.396 crypto/dpaa2_sec: not in enabled drivers build config 00:50:01.396 crypto/ipsec_mb: not in enabled drivers build config 00:50:01.396 crypto/mlx5: not in enabled drivers build config 00:50:01.396 crypto/mvsam: not in enabled drivers build config 00:50:01.396 crypto/nitrox: not in enabled drivers build config 00:50:01.396 crypto/null: not in enabled drivers build config 00:50:01.396 crypto/octeontx: not in enabled drivers build config 00:50:01.396 crypto/openssl: not in enabled drivers build config 00:50:01.396 crypto/scheduler: not in enabled drivers build config 00:50:01.396 crypto/uadk: not in enabled drivers build config 00:50:01.396 crypto/virtio: not in enabled drivers build config 00:50:01.396 compress/isal: not in enabled drivers build config 00:50:01.396 compress/mlx5: not in enabled drivers build config 00:50:01.396 compress/nitrox: not in enabled drivers build config 00:50:01.396 compress/octeontx: not in enabled drivers build config 00:50:01.396 compress/zlib: not in enabled drivers build config 00:50:01.396 regex/*: missing internal dependency, "regexdev" 00:50:01.396 ml/*: missing internal dependency, "mldev" 00:50:01.396 vdpa/ifc: not in enabled drivers build config 00:50:01.396 vdpa/mlx5: not in enabled drivers build config 00:50:01.396 vdpa/nfp: not in enabled drivers build config 00:50:01.396 vdpa/sfc: not in enabled drivers build config 00:50:01.396 event/*: missing internal dependency, "eventdev" 00:50:01.396 baseband/*: missing internal dependency, "bbdev" 00:50:01.396 gpu/*: missing internal dependency, "gpudev" 00:50:01.396 00:50:01.396 00:50:01.396 Build targets in project: 85 00:50:01.396 00:50:01.396 DPDK 24.03.0 00:50:01.396 00:50:01.396 User defined options 00:50:01.396 buildtype : debug 00:50:01.396 default_library : shared 00:50:01.396 libdir : lib 00:50:01.396 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:50:01.396 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:50:01.396 c_link_args : 00:50:01.396 cpu_instruction_set: native 00:50:01.396 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:50:01.396 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:50:01.396 enable_docs : false 00:50:01.396 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:50:01.396 enable_kmods : false 00:50:01.396 max_lcores : 128 00:50:01.396 tests : false 00:50:01.396 00:50:01.396 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:50:01.396 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:50:01.396 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:50:01.396 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:50:01.396 [3/268] Linking static target lib/librte_kvargs.a 00:50:01.396 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:50:01.396 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:50:01.396 [6/268] Linking static target lib/librte_log.a 00:50:01.969 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:50:01.969 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:50:01.969 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:50:01.969 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:50:01.969 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:50:01.969 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:50:02.228 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:50:02.228 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:50:02.228 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:50:02.228 [16/268] Linking static target lib/librte_telemetry.a 00:50:02.487 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:50:02.487 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:50:02.487 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:50:02.487 [20/268] Linking target lib/librte_log.so.24.1 00:50:02.745 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:50:02.745 [22/268] Linking target lib/librte_kvargs.so.24.1 00:50:02.745 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:50:03.005 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:50:03.005 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:50:03.005 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:50:03.005 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:50:03.005 [28/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:50:03.005 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:50:03.005 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:50:03.264 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:50:03.264 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:50:03.264 [33/268] Linking target lib/librte_telemetry.so.24.1 00:50:03.522 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:50:03.522 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:50:03.522 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:50:03.522 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:50:04.113 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:50:04.113 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:50:04.113 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:50:04.113 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:50:04.113 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:50:04.113 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:50:04.113 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:50:04.113 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:50:04.113 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:50:04.371 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:50:04.371 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:50:04.629 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:50:04.629 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:50:04.629 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:50:04.887 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:50:04.887 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:50:04.887 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:50:04.887 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:50:04.887 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:50:05.145 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:50:05.145 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:50:05.145 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:50:05.403 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:50:05.403 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:50:05.661 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:50:05.661 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:50:05.661 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:50:05.919 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:50:05.919 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:50:05.919 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:50:05.919 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:50:05.919 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal.c.o 00:50:06.177 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:50:06.177 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:50:06.435 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:50:06.435 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:50:06.435 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:50:06.435 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:50:06.435 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:50:06.435 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:50:06.435 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:50:06.435 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:50:06.693 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:50:06.693 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:50:06.693 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:50:06.951 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:50:06.951 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:50:07.209 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:50:07.209 [86/268] Linking static target lib/librte_eal.a 00:50:07.209 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:50:07.209 [88/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:50:07.209 [89/268] Linking static target lib/librte_ring.a 00:50:07.468 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:50:07.468 [91/268] Linking static target lib/librte_rcu.a 00:50:07.468 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:50:07.468 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:50:07.468 [94/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:50:07.468 [95/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:50:07.727 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:50:07.727 [97/268] Linking static target lib/librte_mempool.a 00:50:07.727 [98/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:50:07.727 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:50:07.727 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:50:07.727 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:50:07.727 [102/268] Linking static target lib/librte_mbuf.a 00:50:07.986 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:50:08.244 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:50:08.244 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:50:08.244 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:50:08.244 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:50:08.244 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:50:08.244 [109/268] Linking static target lib/librte_net.a 00:50:08.503 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:50:08.503 [111/268] Linking static target lib/librte_meter.a 00:50:08.761 [112/268] Generating 
lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:50:08.761 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:50:08.761 [114/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:50:08.761 [115/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:50:08.761 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:50:09.020 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:50:09.020 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:50:09.020 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:50:09.588 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:50:09.588 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:50:09.588 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:50:09.847 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:50:09.847 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:50:10.107 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:50:10.107 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:50:10.107 [127/268] Linking static target lib/librte_pci.a 00:50:10.107 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:50:10.107 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:50:10.107 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:50:10.366 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:50:10.366 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:50:10.366 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:50:10.366 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:50:10.366 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:50:10.366 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:50:10.366 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:50:10.625 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:50:10.625 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:50:10.625 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:50:10.625 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:50:10.625 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:50:10.625 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:50:10.625 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:50:10.625 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:50:10.625 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:50:10.625 [147/268] Linking static target lib/librte_ethdev.a 00:50:10.883 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:50:11.140 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:50:11.140 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:50:11.140 [151/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:50:11.140 [152/268] Linking static target lib/librte_cmdline.a 00:50:11.140 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:50:11.140 [154/268] Linking static target lib/librte_timer.a 00:50:11.398 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:50:11.398 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:50:11.398 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:50:11.656 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:50:11.656 [159/268] Linking static target lib/librte_hash.a 00:50:11.656 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:50:11.914 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:50:11.914 [162/268] Linking static target lib/librte_compressdev.a 00:50:11.914 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:50:11.914 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:50:12.181 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:50:12.181 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:50:12.461 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:50:12.461 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:50:12.461 [169/268] Linking static target lib/librte_dmadev.a 00:50:12.730 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:50:12.730 [171/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:50:12.730 [172/268] Linking static target lib/librte_cryptodev.a 00:50:12.730 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:50:12.730 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:50:12.730 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:50:12.988 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:50:12.988 [177/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:50:12.988 [178/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:50:13.246 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:50:13.246 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:50:13.504 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:50:13.504 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:50:13.504 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:50:13.504 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:50:13.761 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:50:13.761 [186/268] Linking static target lib/librte_power.a 00:50:13.761 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:50:13.761 [188/268] Linking static target lib/librte_reorder.a 00:50:14.019 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:50:14.019 [190/268] Linking static target lib/librte_security.a 00:50:14.277 [191/268] Compiling C object 
lib/librte_vhost.a.p/vhost_socket.c.o 00:50:14.277 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:50:14.277 [193/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:50:14.277 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:50:14.534 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:50:14.792 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:50:14.792 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:50:15.050 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:50:15.050 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:50:15.050 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:50:15.309 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:50:15.309 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:50:15.567 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:50:15.567 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:50:15.567 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:50:15.567 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:50:15.829 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:50:16.088 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:50:16.088 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:50:16.088 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:50:16.089 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:50:16.089 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:50:16.348 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:50:16.348 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:50:16.348 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:50:16.348 [216/268] Linking static target drivers/librte_bus_vdev.a 00:50:16.348 [217/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:50:16.348 [218/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:50:16.348 [219/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:50:16.348 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:50:16.348 [221/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:50:16.348 [222/268] Linking static target drivers/librte_bus_pci.a 00:50:16.606 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:50:16.606 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:50:16.606 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:50:16.606 [226/268] Linking static target drivers/librte_mempool_ring.a 00:50:16.606 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:50:16.866 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped 
by meson to capture output) 00:50:17.434 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:50:17.434 [230/268] Linking static target lib/librte_vhost.a 00:50:18.369 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:50:18.369 [232/268] Linking target lib/librte_eal.so.24.1 00:50:18.369 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:50:18.369 [234/268] Linking target lib/librte_meter.so.24.1 00:50:18.369 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:50:18.369 [236/268] Linking target lib/librte_ring.so.24.1 00:50:18.369 [237/268] Linking target lib/librte_dmadev.so.24.1 00:50:18.369 [238/268] Linking target lib/librte_pci.so.24.1 00:50:18.369 [239/268] Linking target lib/librte_timer.so.24.1 00:50:18.627 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:50:18.627 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:50:18.627 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:50:18.627 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:50:18.627 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:50:18.627 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:50:18.627 [246/268] Linking target lib/librte_mempool.so.24.1 00:50:18.627 [247/268] Linking target lib/librte_rcu.so.24.1 00:50:18.627 [248/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:50:18.627 [249/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:50:18.886 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:50:18.886 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:50:18.886 [252/268] Linking target lib/librte_mbuf.so.24.1 00:50:18.886 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:50:18.886 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:50:18.886 [255/268] Linking target lib/librte_net.so.24.1 00:50:18.886 [256/268] Linking target lib/librte_reorder.so.24.1 00:50:18.886 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:50:18.886 [258/268] Linking target lib/librte_compressdev.so.24.1 00:50:19.144 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:50:19.144 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:50:19.144 [261/268] Linking target lib/librte_hash.so.24.1 00:50:19.144 [262/268] Linking target lib/librte_cmdline.so.24.1 00:50:19.144 [263/268] Linking target lib/librte_security.so.24.1 00:50:19.144 [264/268] Linking target lib/librte_ethdev.so.24.1 00:50:19.403 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:50:19.403 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:50:19.403 [267/268] Linking target lib/librte_power.so.24.1 00:50:19.403 [268/268] Linking target lib/librte_vhost.so.24.1 00:50:19.403 INFO: autodetecting backend as ninja 00:50:19.403 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:50:41.339 CC lib/ut_mock/mock.o 00:50:41.339 CC lib/ut/ut.o 00:50:41.339 CC lib/log/log.o 00:50:41.339 CC 
lib/log/log_flags.o 00:50:41.339 CC lib/log/log_deprecated.o 00:50:41.339 LIB libspdk_ut_mock.a 00:50:41.339 LIB libspdk_ut.a 00:50:41.339 LIB libspdk_log.a 00:50:41.340 SO libspdk_ut.so.2.0 00:50:41.340 SO libspdk_ut_mock.so.6.0 00:50:41.340 SO libspdk_log.so.7.1 00:50:41.340 SYMLINK libspdk_ut.so 00:50:41.340 SYMLINK libspdk_ut_mock.so 00:50:41.340 SYMLINK libspdk_log.so 00:50:41.340 CXX lib/trace_parser/trace.o 00:50:41.340 CC lib/util/base64.o 00:50:41.340 CC lib/ioat/ioat.o 00:50:41.340 CC lib/util/bit_array.o 00:50:41.340 CC lib/util/cpuset.o 00:50:41.340 CC lib/util/crc16.o 00:50:41.340 CC lib/util/crc32c.o 00:50:41.340 CC lib/util/crc32.o 00:50:41.340 CC lib/dma/dma.o 00:50:41.597 CC lib/vfio_user/host/vfio_user_pci.o 00:50:41.597 CC lib/vfio_user/host/vfio_user.o 00:50:41.597 CC lib/util/crc32_ieee.o 00:50:41.597 CC lib/util/crc64.o 00:50:41.597 CC lib/util/dif.o 00:50:41.597 LIB libspdk_dma.a 00:50:41.597 CC lib/util/fd.o 00:50:41.597 CC lib/util/fd_group.o 00:50:41.855 SO libspdk_dma.so.5.0 00:50:41.855 LIB libspdk_ioat.a 00:50:41.855 CC lib/util/file.o 00:50:41.855 SYMLINK libspdk_dma.so 00:50:41.855 CC lib/util/hexlify.o 00:50:41.855 CC lib/util/iov.o 00:50:41.855 SO libspdk_ioat.so.7.0 00:50:41.855 CC lib/util/math.o 00:50:41.855 CC lib/util/net.o 00:50:41.855 LIB libspdk_vfio_user.a 00:50:41.855 SYMLINK libspdk_ioat.so 00:50:41.855 CC lib/util/pipe.o 00:50:41.855 SO libspdk_vfio_user.so.5.0 00:50:41.855 CC lib/util/strerror_tls.o 00:50:42.113 CC lib/util/string.o 00:50:42.113 SYMLINK libspdk_vfio_user.so 00:50:42.113 CC lib/util/uuid.o 00:50:42.113 CC lib/util/xor.o 00:50:42.113 CC lib/util/zipf.o 00:50:42.113 CC lib/util/md5.o 00:50:42.372 LIB libspdk_util.a 00:50:42.372 SO libspdk_util.so.10.1 00:50:42.372 SYMLINK libspdk_util.so 00:50:42.630 LIB libspdk_trace_parser.a 00:50:42.630 SO libspdk_trace_parser.so.6.0 00:50:42.630 SYMLINK libspdk_trace_parser.so 00:50:42.630 CC lib/rdma_utils/rdma_utils.o 00:50:42.630 CC lib/conf/conf.o 00:50:42.630 CC lib/vmd/vmd.o 00:50:42.630 CC lib/vmd/led.o 00:50:42.630 CC lib/json/json_parse.o 00:50:42.630 CC lib/env_dpdk/env.o 00:50:42.630 CC lib/json/json_util.o 00:50:42.630 CC lib/json/json_write.o 00:50:42.630 CC lib/env_dpdk/memory.o 00:50:42.630 CC lib/idxd/idxd.o 00:50:42.888 CC lib/idxd/idxd_user.o 00:50:42.888 LIB libspdk_conf.a 00:50:42.888 CC lib/idxd/idxd_kernel.o 00:50:42.888 CC lib/env_dpdk/pci.o 00:50:42.888 SO libspdk_conf.so.6.0 00:50:42.888 LIB libspdk_rdma_utils.a 00:50:42.888 LIB libspdk_json.a 00:50:42.888 SO libspdk_rdma_utils.so.1.0 00:50:42.888 SYMLINK libspdk_conf.so 00:50:42.888 SO libspdk_json.so.6.0 00:50:43.145 CC lib/env_dpdk/init.o 00:50:43.145 SYMLINK libspdk_rdma_utils.so 00:50:43.145 CC lib/env_dpdk/threads.o 00:50:43.145 SYMLINK libspdk_json.so 00:50:43.145 CC lib/env_dpdk/pci_ioat.o 00:50:43.145 CC lib/env_dpdk/pci_virtio.o 00:50:43.145 CC lib/rdma_provider/common.o 00:50:43.145 CC lib/env_dpdk/pci_vmd.o 00:50:43.145 CC lib/jsonrpc/jsonrpc_server.o 00:50:43.145 LIB libspdk_idxd.a 00:50:43.401 SO libspdk_idxd.so.12.1 00:50:43.401 LIB libspdk_vmd.a 00:50:43.401 CC lib/env_dpdk/pci_idxd.o 00:50:43.401 SO libspdk_vmd.so.6.0 00:50:43.401 SYMLINK libspdk_idxd.so 00:50:43.401 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:50:43.401 CC lib/jsonrpc/jsonrpc_client.o 00:50:43.401 CC lib/rdma_provider/rdma_provider_verbs.o 00:50:43.401 SYMLINK libspdk_vmd.so 00:50:43.401 CC lib/env_dpdk/pci_event.o 00:50:43.402 CC lib/env_dpdk/sigbus_handler.o 00:50:43.402 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:50:43.402 CC 
lib/env_dpdk/pci_dpdk.o 00:50:43.402 CC lib/env_dpdk/pci_dpdk_2207.o 00:50:43.402 CC lib/env_dpdk/pci_dpdk_2211.o 00:50:43.660 LIB libspdk_rdma_provider.a 00:50:43.660 SO libspdk_rdma_provider.so.7.0 00:50:43.660 LIB libspdk_jsonrpc.a 00:50:43.660 SO libspdk_jsonrpc.so.6.0 00:50:43.660 SYMLINK libspdk_rdma_provider.so 00:50:43.660 SYMLINK libspdk_jsonrpc.so 00:50:43.918 CC lib/rpc/rpc.o 00:50:44.177 LIB libspdk_env_dpdk.a 00:50:44.177 SO libspdk_env_dpdk.so.15.1 00:50:44.177 LIB libspdk_rpc.a 00:50:44.177 SO libspdk_rpc.so.6.0 00:50:44.177 SYMLINK libspdk_env_dpdk.so 00:50:44.177 SYMLINK libspdk_rpc.so 00:50:44.461 CC lib/trace/trace.o 00:50:44.461 CC lib/trace/trace_flags.o 00:50:44.461 CC lib/trace/trace_rpc.o 00:50:44.461 CC lib/keyring/keyring_rpc.o 00:50:44.461 CC lib/keyring/keyring.o 00:50:44.461 CC lib/notify/notify.o 00:50:44.461 CC lib/notify/notify_rpc.o 00:50:44.718 LIB libspdk_notify.a 00:50:44.718 SO libspdk_notify.so.6.0 00:50:44.718 LIB libspdk_keyring.a 00:50:44.718 SYMLINK libspdk_notify.so 00:50:44.718 LIB libspdk_trace.a 00:50:44.718 SO libspdk_keyring.so.2.0 00:50:44.976 SO libspdk_trace.so.11.0 00:50:44.976 SYMLINK libspdk_keyring.so 00:50:44.976 SYMLINK libspdk_trace.so 00:50:45.233 CC lib/sock/sock.o 00:50:45.233 CC lib/sock/sock_rpc.o 00:50:45.233 CC lib/thread/iobuf.o 00:50:45.233 CC lib/thread/thread.o 00:50:45.490 LIB libspdk_sock.a 00:50:45.490 SO libspdk_sock.so.10.0 00:50:45.490 SYMLINK libspdk_sock.so 00:50:45.747 CC lib/nvme/nvme_ctrlr_cmd.o 00:50:45.747 CC lib/nvme/nvme_fabric.o 00:50:45.747 CC lib/nvme/nvme_ctrlr.o 00:50:45.747 CC lib/nvme/nvme_ns_cmd.o 00:50:45.747 CC lib/nvme/nvme_ns.o 00:50:45.747 CC lib/nvme/nvme_qpair.o 00:50:45.747 CC lib/nvme/nvme_pcie.o 00:50:45.747 CC lib/nvme/nvme_pcie_common.o 00:50:45.747 CC lib/nvme/nvme.o 00:50:46.679 CC lib/nvme/nvme_quirks.o 00:50:46.679 CC lib/nvme/nvme_transport.o 00:50:46.679 CC lib/nvme/nvme_discovery.o 00:50:46.679 LIB libspdk_thread.a 00:50:46.679 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:50:46.679 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:50:46.679 SO libspdk_thread.so.11.0 00:50:46.679 CC lib/nvme/nvme_tcp.o 00:50:46.937 SYMLINK libspdk_thread.so 00:50:46.937 CC lib/nvme/nvme_opal.o 00:50:46.937 CC lib/nvme/nvme_io_msg.o 00:50:47.195 CC lib/accel/accel.o 00:50:47.195 CC lib/accel/accel_rpc.o 00:50:47.453 CC lib/accel/accel_sw.o 00:50:47.453 CC lib/nvme/nvme_poll_group.o 00:50:47.453 CC lib/nvme/nvme_zns.o 00:50:47.453 CC lib/nvme/nvme_stubs.o 00:50:47.453 CC lib/nvme/nvme_auth.o 00:50:47.453 CC lib/nvme/nvme_cuse.o 00:50:47.453 CC lib/nvme/nvme_rdma.o 00:50:47.712 CC lib/blob/blobstore.o 00:50:48.279 CC lib/init/json_config.o 00:50:48.279 CC lib/virtio/virtio.o 00:50:48.279 CC lib/blob/request.o 00:50:48.279 CC lib/fsdev/fsdev.o 00:50:48.279 LIB libspdk_accel.a 00:50:48.279 CC lib/fsdev/fsdev_io.o 00:50:48.279 SO libspdk_accel.so.16.0 00:50:48.279 CC lib/init/subsystem.o 00:50:48.538 SYMLINK libspdk_accel.so 00:50:48.538 CC lib/virtio/virtio_vhost_user.o 00:50:48.538 CC lib/virtio/virtio_vfio_user.o 00:50:48.538 CC lib/fsdev/fsdev_rpc.o 00:50:48.538 CC lib/init/subsystem_rpc.o 00:50:48.538 CC lib/init/rpc.o 00:50:48.538 CC lib/bdev/bdev.o 00:50:48.538 CC lib/bdev/bdev_rpc.o 00:50:48.538 CC lib/virtio/virtio_pci.o 00:50:48.797 CC lib/blob/zeroes.o 00:50:48.797 CC lib/bdev/bdev_zone.o 00:50:48.797 LIB libspdk_init.a 00:50:48.797 CC lib/blob/blob_bs_dev.o 00:50:48.797 SO libspdk_init.so.6.0 00:50:48.797 LIB libspdk_fsdev.a 00:50:48.797 SYMLINK libspdk_init.so 00:50:48.797 CC lib/bdev/part.o 00:50:48.797 CC 
lib/bdev/scsi_nvme.o 00:50:48.797 SO libspdk_fsdev.so.2.0 00:50:49.056 LIB libspdk_virtio.a 00:50:49.056 SYMLINK libspdk_fsdev.so 00:50:49.056 SO libspdk_virtio.so.7.0 00:50:49.056 LIB libspdk_nvme.a 00:50:49.056 SYMLINK libspdk_virtio.so 00:50:49.056 CC lib/event/app.o 00:50:49.056 CC lib/event/reactor.o 00:50:49.056 CC lib/event/log_rpc.o 00:50:49.056 CC lib/event/app_rpc.o 00:50:49.056 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:50:49.056 CC lib/event/scheduler_static.o 00:50:49.315 SO libspdk_nvme.so.15.0 00:50:49.574 SYMLINK libspdk_nvme.so 00:50:49.574 LIB libspdk_event.a 00:50:49.574 SO libspdk_event.so.14.0 00:50:49.833 SYMLINK libspdk_event.so 00:50:49.833 LIB libspdk_fuse_dispatcher.a 00:50:49.833 SO libspdk_fuse_dispatcher.so.1.0 00:50:49.833 SYMLINK libspdk_fuse_dispatcher.so 00:50:50.769 LIB libspdk_blob.a 00:50:50.769 SO libspdk_blob.so.12.0 00:50:50.769 SYMLINK libspdk_blob.so 00:50:51.040 CC lib/blobfs/tree.o 00:50:51.040 CC lib/blobfs/blobfs.o 00:50:51.040 CC lib/lvol/lvol.o 00:50:51.040 LIB libspdk_bdev.a 00:50:51.299 SO libspdk_bdev.so.17.0 00:50:51.299 SYMLINK libspdk_bdev.so 00:50:51.557 CC lib/nvmf/ctrlr.o 00:50:51.557 CC lib/ublk/ublk.o 00:50:51.557 CC lib/nvmf/ctrlr_discovery.o 00:50:51.557 CC lib/ublk/ublk_rpc.o 00:50:51.557 CC lib/nvmf/ctrlr_bdev.o 00:50:51.557 CC lib/nbd/nbd.o 00:50:51.557 CC lib/scsi/dev.o 00:50:51.557 CC lib/ftl/ftl_core.o 00:50:51.815 CC lib/ftl/ftl_init.o 00:50:51.815 LIB libspdk_blobfs.a 00:50:51.815 CC lib/scsi/lun.o 00:50:51.815 SO libspdk_blobfs.so.11.0 00:50:51.815 SYMLINK libspdk_blobfs.so 00:50:51.815 CC lib/scsi/port.o 00:50:52.073 CC lib/ftl/ftl_layout.o 00:50:52.073 LIB libspdk_lvol.a 00:50:52.073 CC lib/nbd/nbd_rpc.o 00:50:52.073 SO libspdk_lvol.so.11.0 00:50:52.073 CC lib/nvmf/subsystem.o 00:50:52.073 SYMLINK libspdk_lvol.so 00:50:52.073 CC lib/ftl/ftl_debug.o 00:50:52.073 CC lib/nvmf/nvmf.o 00:50:52.073 CC lib/nvmf/nvmf_rpc.o 00:50:52.073 CC lib/scsi/scsi.o 00:50:52.073 LIB libspdk_nbd.a 00:50:52.332 SO libspdk_nbd.so.7.0 00:50:52.332 LIB libspdk_ublk.a 00:50:52.332 SO libspdk_ublk.so.3.0 00:50:52.332 SYMLINK libspdk_nbd.so 00:50:52.332 CC lib/scsi/scsi_bdev.o 00:50:52.332 CC lib/scsi/scsi_pr.o 00:50:52.332 CC lib/ftl/ftl_io.o 00:50:52.332 CC lib/nvmf/transport.o 00:50:52.332 CC lib/nvmf/tcp.o 00:50:52.332 SYMLINK libspdk_ublk.so 00:50:52.332 CC lib/nvmf/stubs.o 00:50:52.590 CC lib/ftl/ftl_sb.o 00:50:52.590 CC lib/scsi/scsi_rpc.o 00:50:52.848 CC lib/scsi/task.o 00:50:52.848 CC lib/ftl/ftl_l2p.o 00:50:52.848 CC lib/nvmf/mdns_server.o 00:50:52.848 CC lib/nvmf/rdma.o 00:50:52.848 CC lib/nvmf/auth.o 00:50:53.106 LIB libspdk_scsi.a 00:50:53.106 CC lib/ftl/ftl_l2p_flat.o 00:50:53.106 CC lib/ftl/ftl_nv_cache.o 00:50:53.106 CC lib/ftl/ftl_band.o 00:50:53.106 SO libspdk_scsi.so.9.0 00:50:53.106 SYMLINK libspdk_scsi.so 00:50:53.106 CC lib/ftl/ftl_band_ops.o 00:50:53.106 CC lib/ftl/ftl_writer.o 00:50:53.365 CC lib/ftl/ftl_rq.o 00:50:53.365 CC lib/ftl/ftl_reloc.o 00:50:53.365 CC lib/ftl/ftl_l2p_cache.o 00:50:53.365 CC lib/ftl/ftl_p2l.o 00:50:53.365 CC lib/ftl/ftl_p2l_log.o 00:50:53.623 CC lib/ftl/mngt/ftl_mngt.o 00:50:53.623 CC lib/iscsi/conn.o 00:50:53.623 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:50:53.623 CC lib/iscsi/init_grp.o 00:50:53.880 CC lib/iscsi/iscsi.o 00:50:53.880 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:50:53.880 CC lib/ftl/mngt/ftl_mngt_startup.o 00:50:53.880 CC lib/ftl/mngt/ftl_mngt_md.o 00:50:53.880 CC lib/ftl/mngt/ftl_mngt_misc.o 00:50:53.880 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:50:53.880 CC lib/vhost/vhost.o 00:50:53.880 CC 
lib/ftl/mngt/ftl_mngt_l2p.o 00:50:54.138 CC lib/iscsi/param.o 00:50:54.138 CC lib/iscsi/portal_grp.o 00:50:54.138 CC lib/ftl/mngt/ftl_mngt_band.o 00:50:54.138 CC lib/vhost/vhost_rpc.o 00:50:54.138 CC lib/iscsi/tgt_node.o 00:50:54.138 CC lib/iscsi/iscsi_subsystem.o 00:50:54.396 CC lib/iscsi/iscsi_rpc.o 00:50:54.396 CC lib/vhost/vhost_scsi.o 00:50:54.396 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:50:54.396 CC lib/iscsi/task.o 00:50:54.654 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:50:54.654 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:50:54.654 CC lib/vhost/vhost_blk.o 00:50:54.654 CC lib/vhost/rte_vhost_user.o 00:50:54.654 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:50:54.912 LIB libspdk_nvmf.a 00:50:54.912 CC lib/ftl/utils/ftl_conf.o 00:50:54.912 CC lib/ftl/utils/ftl_md.o 00:50:54.912 SO libspdk_nvmf.so.20.0 00:50:54.912 CC lib/ftl/utils/ftl_mempool.o 00:50:55.170 CC lib/ftl/utils/ftl_bitmap.o 00:50:55.170 CC lib/ftl/utils/ftl_property.o 00:50:55.170 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:50:55.170 SYMLINK libspdk_nvmf.so 00:50:55.170 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:50:55.170 LIB libspdk_iscsi.a 00:50:55.170 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:50:55.170 SO libspdk_iscsi.so.8.0 00:50:55.170 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:50:55.428 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:50:55.428 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:50:55.428 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:50:55.428 CC lib/ftl/upgrade/ftl_sb_v3.o 00:50:55.428 CC lib/ftl/upgrade/ftl_sb_v5.o 00:50:55.428 SYMLINK libspdk_iscsi.so 00:50:55.428 CC lib/ftl/nvc/ftl_nvc_dev.o 00:50:55.428 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:50:55.428 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:50:55.428 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:50:55.428 CC lib/ftl/base/ftl_base_dev.o 00:50:55.685 CC lib/ftl/base/ftl_base_bdev.o 00:50:55.685 CC lib/ftl/ftl_trace.o 00:50:55.685 LIB libspdk_vhost.a 00:50:55.946 LIB libspdk_ftl.a 00:50:55.946 SO libspdk_vhost.so.8.0 00:50:55.946 SYMLINK libspdk_vhost.so 00:50:55.946 SO libspdk_ftl.so.9.0 00:50:56.221 SYMLINK libspdk_ftl.so 00:50:56.502 CC module/env_dpdk/env_dpdk_rpc.o 00:50:56.784 CC module/blob/bdev/blob_bdev.o 00:50:56.784 CC module/fsdev/aio/fsdev_aio.o 00:50:56.784 CC module/keyring/linux/keyring.o 00:50:56.784 CC module/accel/ioat/accel_ioat.o 00:50:56.784 CC module/scheduler/dynamic/scheduler_dynamic.o 00:50:56.784 CC module/accel/error/accel_error.o 00:50:56.784 CC module/accel/dsa/accel_dsa.o 00:50:56.784 CC module/keyring/file/keyring.o 00:50:56.784 CC module/sock/posix/posix.o 00:50:56.784 LIB libspdk_env_dpdk_rpc.a 00:50:56.784 SO libspdk_env_dpdk_rpc.so.6.0 00:50:56.784 SYMLINK libspdk_env_dpdk_rpc.so 00:50:56.784 CC module/accel/ioat/accel_ioat_rpc.o 00:50:56.784 CC module/keyring/linux/keyring_rpc.o 00:50:56.784 CC module/keyring/file/keyring_rpc.o 00:50:56.784 CC module/fsdev/aio/fsdev_aio_rpc.o 00:50:56.784 CC module/accel/error/accel_error_rpc.o 00:50:56.784 LIB libspdk_scheduler_dynamic.a 00:50:57.042 SO libspdk_scheduler_dynamic.so.4.0 00:50:57.042 LIB libspdk_blob_bdev.a 00:50:57.042 LIB libspdk_accel_ioat.a 00:50:57.042 LIB libspdk_keyring_linux.a 00:50:57.042 SO libspdk_blob_bdev.so.12.0 00:50:57.042 CC module/accel/dsa/accel_dsa_rpc.o 00:50:57.042 SYMLINK libspdk_scheduler_dynamic.so 00:50:57.042 LIB libspdk_keyring_file.a 00:50:57.042 SO libspdk_keyring_linux.so.1.0 00:50:57.042 SO libspdk_accel_ioat.so.6.0 00:50:57.042 SO libspdk_keyring_file.so.2.0 00:50:57.042 CC module/fsdev/aio/linux_aio_mgr.o 00:50:57.042 SYMLINK libspdk_blob_bdev.so 00:50:57.042 LIB 
libspdk_accel_error.a 00:50:57.042 SYMLINK libspdk_keyring_linux.so 00:50:57.042 SYMLINK libspdk_accel_ioat.so 00:50:57.042 SO libspdk_accel_error.so.2.0 00:50:57.042 SYMLINK libspdk_keyring_file.so 00:50:57.042 LIB libspdk_accel_dsa.a 00:50:57.042 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:50:57.042 SO libspdk_accel_dsa.so.5.0 00:50:57.301 SYMLINK libspdk_accel_error.so 00:50:57.301 SYMLINK libspdk_accel_dsa.so 00:50:57.301 CC module/scheduler/gscheduler/gscheduler.o 00:50:57.301 CC module/accel/iaa/accel_iaa.o 00:50:57.301 LIB libspdk_scheduler_dpdk_governor.a 00:50:57.301 LIB libspdk_fsdev_aio.a 00:50:57.301 CC module/bdev/delay/vbdev_delay.o 00:50:57.301 CC module/blobfs/bdev/blobfs_bdev.o 00:50:57.301 CC module/bdev/error/vbdev_error.o 00:50:57.301 SO libspdk_scheduler_dpdk_governor.so.4.0 00:50:57.301 SO libspdk_fsdev_aio.so.1.0 00:50:57.560 CC module/bdev/gpt/gpt.o 00:50:57.560 SYMLINK libspdk_scheduler_dpdk_governor.so 00:50:57.560 LIB libspdk_scheduler_gscheduler.a 00:50:57.560 CC module/bdev/error/vbdev_error_rpc.o 00:50:57.560 CC module/bdev/lvol/vbdev_lvol.o 00:50:57.560 LIB libspdk_sock_posix.a 00:50:57.560 SYMLINK libspdk_fsdev_aio.so 00:50:57.560 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:50:57.560 SO libspdk_scheduler_gscheduler.so.4.0 00:50:57.560 SO libspdk_sock_posix.so.6.0 00:50:57.560 CC module/accel/iaa/accel_iaa_rpc.o 00:50:57.560 SYMLINK libspdk_scheduler_gscheduler.so 00:50:57.560 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:50:57.560 SYMLINK libspdk_sock_posix.so 00:50:57.560 CC module/bdev/delay/vbdev_delay_rpc.o 00:50:57.560 CC module/bdev/gpt/vbdev_gpt.o 00:50:57.560 LIB libspdk_bdev_error.a 00:50:57.560 LIB libspdk_accel_iaa.a 00:50:57.819 SO libspdk_accel_iaa.so.3.0 00:50:57.819 SO libspdk_bdev_error.so.6.0 00:50:57.819 CC module/bdev/malloc/bdev_malloc.o 00:50:57.819 LIB libspdk_blobfs_bdev.a 00:50:57.819 CC module/bdev/malloc/bdev_malloc_rpc.o 00:50:57.819 LIB libspdk_bdev_delay.a 00:50:57.819 SYMLINK libspdk_accel_iaa.so 00:50:57.819 SO libspdk_blobfs_bdev.so.6.0 00:50:57.819 SYMLINK libspdk_bdev_error.so 00:50:57.819 SO libspdk_bdev_delay.so.6.0 00:50:57.819 CC module/bdev/null/bdev_null.o 00:50:57.819 SYMLINK libspdk_blobfs_bdev.so 00:50:57.819 CC module/bdev/null/bdev_null_rpc.o 00:50:57.819 SYMLINK libspdk_bdev_delay.so 00:50:57.819 LIB libspdk_bdev_gpt.a 00:50:58.078 SO libspdk_bdev_gpt.so.6.0 00:50:58.078 CC module/bdev/nvme/bdev_nvme.o 00:50:58.078 CC module/bdev/passthru/vbdev_passthru.o 00:50:58.078 LIB libspdk_bdev_lvol.a 00:50:58.078 SO libspdk_bdev_lvol.so.6.0 00:50:58.078 CC module/bdev/split/vbdev_split.o 00:50:58.078 SYMLINK libspdk_bdev_gpt.so 00:50:58.078 CC module/bdev/raid/bdev_raid.o 00:50:58.078 CC module/bdev/split/vbdev_split_rpc.o 00:50:58.078 CC module/bdev/raid/bdev_raid_rpc.o 00:50:58.078 SYMLINK libspdk_bdev_lvol.so 00:50:58.078 CC module/bdev/raid/bdev_raid_sb.o 00:50:58.078 LIB libspdk_bdev_null.a 00:50:58.078 CC module/bdev/zone_block/vbdev_zone_block.o 00:50:58.078 LIB libspdk_bdev_malloc.a 00:50:58.078 SO libspdk_bdev_null.so.6.0 00:50:58.078 SO libspdk_bdev_malloc.so.6.0 00:50:58.078 SYMLINK libspdk_bdev_null.so 00:50:58.078 CC module/bdev/raid/raid0.o 00:50:58.338 SYMLINK libspdk_bdev_malloc.so 00:50:58.338 CC module/bdev/raid/raid1.o 00:50:58.338 CC module/bdev/raid/concat.o 00:50:58.338 LIB libspdk_bdev_split.a 00:50:58.338 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:50:58.338 CC module/bdev/nvme/bdev_nvme_rpc.o 00:50:58.338 SO libspdk_bdev_split.so.6.0 00:50:58.338 SYMLINK libspdk_bdev_split.so 00:50:58.339 CC 
module/bdev/zone_block/vbdev_zone_block_rpc.o 00:50:58.339 LIB libspdk_bdev_passthru.a 00:50:58.339 CC module/bdev/nvme/nvme_rpc.o 00:50:58.598 SO libspdk_bdev_passthru.so.6.0 00:50:58.598 CC module/bdev/nvme/bdev_mdns_client.o 00:50:58.598 CC module/bdev/aio/bdev_aio.o 00:50:58.598 LIB libspdk_bdev_zone_block.a 00:50:58.598 SYMLINK libspdk_bdev_passthru.so 00:50:58.598 SO libspdk_bdev_zone_block.so.6.0 00:50:58.598 CC module/bdev/ftl/bdev_ftl.o 00:50:58.598 CC module/bdev/iscsi/bdev_iscsi.o 00:50:58.598 SYMLINK libspdk_bdev_zone_block.so 00:50:58.598 CC module/bdev/nvme/vbdev_opal.o 00:50:58.857 CC module/bdev/nvme/vbdev_opal_rpc.o 00:50:58.857 CC module/bdev/virtio/bdev_virtio_scsi.o 00:50:58.857 CC module/bdev/virtio/bdev_virtio_blk.o 00:50:58.857 CC module/bdev/aio/bdev_aio_rpc.o 00:50:58.857 CC module/bdev/virtio/bdev_virtio_rpc.o 00:50:58.857 CC module/bdev/ftl/bdev_ftl_rpc.o 00:50:58.857 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:50:58.857 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:50:59.116 LIB libspdk_bdev_aio.a 00:50:59.116 LIB libspdk_bdev_raid.a 00:50:59.116 SO libspdk_bdev_aio.so.6.0 00:50:59.116 LIB libspdk_bdev_iscsi.a 00:50:59.116 SO libspdk_bdev_raid.so.6.0 00:50:59.116 SYMLINK libspdk_bdev_aio.so 00:50:59.116 LIB libspdk_bdev_ftl.a 00:50:59.116 SO libspdk_bdev_iscsi.so.6.0 00:50:59.116 SO libspdk_bdev_ftl.so.6.0 00:50:59.116 SYMLINK libspdk_bdev_raid.so 00:50:59.116 SYMLINK libspdk_bdev_iscsi.so 00:50:59.375 SYMLINK libspdk_bdev_ftl.so 00:50:59.375 LIB libspdk_bdev_virtio.a 00:50:59.375 SO libspdk_bdev_virtio.so.6.0 00:50:59.375 SYMLINK libspdk_bdev_virtio.so 00:51:00.314 LIB libspdk_bdev_nvme.a 00:51:00.314 SO libspdk_bdev_nvme.so.7.1 00:51:00.573 SYMLINK libspdk_bdev_nvme.so 00:51:00.832 CC module/event/subsystems/sock/sock.o 00:51:00.832 CC module/event/subsystems/scheduler/scheduler.o 00:51:00.832 CC module/event/subsystems/fsdev/fsdev.o 00:51:00.832 CC module/event/subsystems/vmd/vmd.o 00:51:00.832 CC module/event/subsystems/iobuf/iobuf.o 00:51:00.832 CC module/event/subsystems/vmd/vmd_rpc.o 00:51:00.832 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:51:00.832 CC module/event/subsystems/keyring/keyring.o 00:51:01.090 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:51:01.090 LIB libspdk_event_fsdev.a 00:51:01.090 LIB libspdk_event_keyring.a 00:51:01.090 LIB libspdk_event_sock.a 00:51:01.090 LIB libspdk_event_vmd.a 00:51:01.090 LIB libspdk_event_vhost_blk.a 00:51:01.090 SO libspdk_event_fsdev.so.1.0 00:51:01.090 LIB libspdk_event_scheduler.a 00:51:01.090 LIB libspdk_event_iobuf.a 00:51:01.090 SO libspdk_event_keyring.so.1.0 00:51:01.090 SO libspdk_event_sock.so.5.0 00:51:01.090 SO libspdk_event_vhost_blk.so.3.0 00:51:01.090 SO libspdk_event_scheduler.so.4.0 00:51:01.090 SO libspdk_event_vmd.so.6.0 00:51:01.090 SO libspdk_event_iobuf.so.3.0 00:51:01.090 SYMLINK libspdk_event_fsdev.so 00:51:01.090 SYMLINK libspdk_event_vhost_blk.so 00:51:01.090 SYMLINK libspdk_event_keyring.so 00:51:01.348 SYMLINK libspdk_event_sock.so 00:51:01.348 SYMLINK libspdk_event_scheduler.so 00:51:01.348 SYMLINK libspdk_event_vmd.so 00:51:01.348 SYMLINK libspdk_event_iobuf.so 00:51:01.607 CC module/event/subsystems/accel/accel.o 00:51:01.607 LIB libspdk_event_accel.a 00:51:01.607 SO libspdk_event_accel.so.6.0 00:51:01.607 SYMLINK libspdk_event_accel.so 00:51:02.175 CC module/event/subsystems/bdev/bdev.o 00:51:02.175 LIB libspdk_event_bdev.a 00:51:02.175 SO libspdk_event_bdev.so.6.0 00:51:02.434 SYMLINK libspdk_event_bdev.so 00:51:02.434 CC module/event/subsystems/scsi/scsi.o 00:51:02.434 CC 
module/event/subsystems/nbd/nbd.o 00:51:02.434 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:51:02.434 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:51:02.434 CC module/event/subsystems/ublk/ublk.o 00:51:02.693 LIB libspdk_event_nbd.a 00:51:02.693 LIB libspdk_event_ublk.a 00:51:02.693 LIB libspdk_event_scsi.a 00:51:02.693 SO libspdk_event_nbd.so.6.0 00:51:02.693 SO libspdk_event_ublk.so.3.0 00:51:02.693 SO libspdk_event_scsi.so.6.0 00:51:02.693 SYMLINK libspdk_event_ublk.so 00:51:02.693 SYMLINK libspdk_event_nbd.so 00:51:02.693 SYMLINK libspdk_event_scsi.so 00:51:02.693 LIB libspdk_event_nvmf.a 00:51:02.953 SO libspdk_event_nvmf.so.6.0 00:51:02.953 SYMLINK libspdk_event_nvmf.so 00:51:02.953 CC module/event/subsystems/iscsi/iscsi.o 00:51:02.953 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:51:03.212 LIB libspdk_event_vhost_scsi.a 00:51:03.212 LIB libspdk_event_iscsi.a 00:51:03.212 SO libspdk_event_vhost_scsi.so.3.0 00:51:03.212 SO libspdk_event_iscsi.so.6.0 00:51:03.212 SYMLINK libspdk_event_vhost_scsi.so 00:51:03.212 SYMLINK libspdk_event_iscsi.so 00:51:03.472 SO libspdk.so.6.0 00:51:03.472 SYMLINK libspdk.so 00:51:03.730 CXX app/trace/trace.o 00:51:03.730 TEST_HEADER include/spdk/accel.h 00:51:03.730 TEST_HEADER include/spdk/accel_module.h 00:51:03.730 TEST_HEADER include/spdk/assert.h 00:51:03.730 TEST_HEADER include/spdk/barrier.h 00:51:03.730 TEST_HEADER include/spdk/base64.h 00:51:03.730 TEST_HEADER include/spdk/bdev.h 00:51:03.730 TEST_HEADER include/spdk/bdev_module.h 00:51:03.730 TEST_HEADER include/spdk/bdev_zone.h 00:51:03.730 CC examples/interrupt_tgt/interrupt_tgt.o 00:51:03.730 TEST_HEADER include/spdk/bit_array.h 00:51:03.731 TEST_HEADER include/spdk/bit_pool.h 00:51:03.731 TEST_HEADER include/spdk/blob_bdev.h 00:51:03.731 TEST_HEADER include/spdk/blobfs_bdev.h 00:51:03.731 TEST_HEADER include/spdk/blobfs.h 00:51:03.731 TEST_HEADER include/spdk/blob.h 00:51:03.731 TEST_HEADER include/spdk/conf.h 00:51:03.731 TEST_HEADER include/spdk/config.h 00:51:03.731 TEST_HEADER include/spdk/cpuset.h 00:51:03.731 TEST_HEADER include/spdk/crc16.h 00:51:03.731 TEST_HEADER include/spdk/crc32.h 00:51:03.731 TEST_HEADER include/spdk/crc64.h 00:51:03.731 TEST_HEADER include/spdk/dif.h 00:51:03.731 TEST_HEADER include/spdk/dma.h 00:51:03.731 TEST_HEADER include/spdk/endian.h 00:51:03.731 TEST_HEADER include/spdk/env_dpdk.h 00:51:03.731 TEST_HEADER include/spdk/env.h 00:51:03.731 TEST_HEADER include/spdk/event.h 00:51:03.731 TEST_HEADER include/spdk/fd_group.h 00:51:03.731 TEST_HEADER include/spdk/fd.h 00:51:03.731 TEST_HEADER include/spdk/file.h 00:51:03.731 TEST_HEADER include/spdk/fsdev.h 00:51:03.731 CC examples/ioat/perf/perf.o 00:51:03.731 CC examples/util/zipf/zipf.o 00:51:03.731 TEST_HEADER include/spdk/fsdev_module.h 00:51:03.731 TEST_HEADER include/spdk/ftl.h 00:51:03.731 TEST_HEADER include/spdk/fuse_dispatcher.h 00:51:03.731 TEST_HEADER include/spdk/gpt_spec.h 00:51:03.731 CC test/thread/poller_perf/poller_perf.o 00:51:03.731 TEST_HEADER include/spdk/hexlify.h 00:51:03.731 TEST_HEADER include/spdk/histogram_data.h 00:51:03.731 TEST_HEADER include/spdk/idxd.h 00:51:03.731 TEST_HEADER include/spdk/idxd_spec.h 00:51:03.990 TEST_HEADER include/spdk/init.h 00:51:03.990 TEST_HEADER include/spdk/ioat.h 00:51:03.990 TEST_HEADER include/spdk/ioat_spec.h 00:51:03.990 TEST_HEADER include/spdk/iscsi_spec.h 00:51:03.990 TEST_HEADER include/spdk/json.h 00:51:03.990 TEST_HEADER include/spdk/jsonrpc.h 00:51:03.990 TEST_HEADER include/spdk/keyring.h 00:51:03.990 TEST_HEADER 
include/spdk/keyring_module.h 00:51:03.990 TEST_HEADER include/spdk/likely.h 00:51:03.990 CC test/app/bdev_svc/bdev_svc.o 00:51:03.990 TEST_HEADER include/spdk/log.h 00:51:03.990 CC test/dma/test_dma/test_dma.o 00:51:03.990 TEST_HEADER include/spdk/lvol.h 00:51:03.990 TEST_HEADER include/spdk/md5.h 00:51:03.990 TEST_HEADER include/spdk/memory.h 00:51:03.990 TEST_HEADER include/spdk/mmio.h 00:51:03.990 TEST_HEADER include/spdk/nbd.h 00:51:03.990 TEST_HEADER include/spdk/net.h 00:51:03.990 TEST_HEADER include/spdk/notify.h 00:51:03.990 TEST_HEADER include/spdk/nvme.h 00:51:03.990 TEST_HEADER include/spdk/nvme_intel.h 00:51:03.990 TEST_HEADER include/spdk/nvme_ocssd.h 00:51:03.990 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:51:03.990 TEST_HEADER include/spdk/nvme_spec.h 00:51:03.990 TEST_HEADER include/spdk/nvme_zns.h 00:51:03.990 TEST_HEADER include/spdk/nvmf_cmd.h 00:51:03.990 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:51:03.990 TEST_HEADER include/spdk/nvmf.h 00:51:03.990 TEST_HEADER include/spdk/nvmf_spec.h 00:51:03.990 TEST_HEADER include/spdk/nvmf_transport.h 00:51:03.990 TEST_HEADER include/spdk/opal.h 00:51:03.990 TEST_HEADER include/spdk/opal_spec.h 00:51:03.990 TEST_HEADER include/spdk/pci_ids.h 00:51:03.990 CC test/env/mem_callbacks/mem_callbacks.o 00:51:03.990 TEST_HEADER include/spdk/pipe.h 00:51:03.990 TEST_HEADER include/spdk/queue.h 00:51:03.990 TEST_HEADER include/spdk/reduce.h 00:51:03.990 TEST_HEADER include/spdk/rpc.h 00:51:03.990 TEST_HEADER include/spdk/scheduler.h 00:51:03.990 TEST_HEADER include/spdk/scsi.h 00:51:03.990 TEST_HEADER include/spdk/scsi_spec.h 00:51:03.990 TEST_HEADER include/spdk/sock.h 00:51:03.990 TEST_HEADER include/spdk/stdinc.h 00:51:03.990 TEST_HEADER include/spdk/string.h 00:51:03.990 TEST_HEADER include/spdk/thread.h 00:51:03.990 TEST_HEADER include/spdk/trace.h 00:51:03.990 TEST_HEADER include/spdk/trace_parser.h 00:51:03.990 TEST_HEADER include/spdk/tree.h 00:51:03.990 TEST_HEADER include/spdk/ublk.h 00:51:03.990 TEST_HEADER include/spdk/util.h 00:51:03.990 TEST_HEADER include/spdk/uuid.h 00:51:03.990 TEST_HEADER include/spdk/version.h 00:51:03.990 TEST_HEADER include/spdk/vfio_user_pci.h 00:51:03.990 TEST_HEADER include/spdk/vfio_user_spec.h 00:51:03.990 LINK interrupt_tgt 00:51:03.990 TEST_HEADER include/spdk/vhost.h 00:51:03.990 TEST_HEADER include/spdk/vmd.h 00:51:03.990 TEST_HEADER include/spdk/xor.h 00:51:03.990 TEST_HEADER include/spdk/zipf.h 00:51:03.990 CXX test/cpp_headers/accel.o 00:51:03.990 LINK poller_perf 00:51:03.990 LINK zipf 00:51:04.250 LINK ioat_perf 00:51:04.250 LINK bdev_svc 00:51:04.250 LINK spdk_trace 00:51:04.250 CXX test/cpp_headers/accel_module.o 00:51:04.250 CXX test/cpp_headers/assert.o 00:51:04.250 CC examples/ioat/verify/verify.o 00:51:04.250 CC test/env/vtophys/vtophys.o 00:51:04.509 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:51:04.509 CXX test/cpp_headers/barrier.o 00:51:04.509 LINK test_dma 00:51:04.509 CC app/trace_record/trace_record.o 00:51:04.509 LINK vtophys 00:51:04.509 CC test/app/histogram_perf/histogram_perf.o 00:51:04.509 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:51:04.509 LINK env_dpdk_post_init 00:51:04.509 LINK verify 00:51:04.768 LINK mem_callbacks 00:51:04.768 CXX test/cpp_headers/base64.o 00:51:04.768 LINK histogram_perf 00:51:04.768 CC test/env/memory/memory_ut.o 00:51:04.768 LINK spdk_trace_record 00:51:04.768 CXX test/cpp_headers/bdev.o 00:51:04.768 CC test/env/pci/pci_ut.o 00:51:04.768 CXX test/cpp_headers/bdev_module.o 00:51:04.768 CC test/app/jsoncat/jsoncat.o 00:51:04.768 CC 
test/app/stub/stub.o 00:51:05.027 CC examples/thread/thread/thread_ex.o 00:51:05.027 LINK nvme_fuzz 00:51:05.027 LINK jsoncat 00:51:05.027 CC app/nvmf_tgt/nvmf_main.o 00:51:05.027 LINK stub 00:51:05.027 CXX test/cpp_headers/bdev_zone.o 00:51:05.286 CXX test/cpp_headers/bit_array.o 00:51:05.286 CC app/iscsi_tgt/iscsi_tgt.o 00:51:05.286 CXX test/cpp_headers/bit_pool.o 00:51:05.286 LINK thread 00:51:05.286 LINK nvmf_tgt 00:51:05.286 LINK pci_ut 00:51:05.286 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:51:05.544 CXX test/cpp_headers/blob_bdev.o 00:51:05.544 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:51:05.544 LINK iscsi_tgt 00:51:05.544 CXX test/cpp_headers/blobfs_bdev.o 00:51:05.544 CC app/spdk_tgt/spdk_tgt.o 00:51:05.544 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:51:05.544 CC test/rpc_client/rpc_client_test.o 00:51:05.803 CXX test/cpp_headers/blobfs.o 00:51:05.803 CC examples/sock/hello_world/hello_sock.o 00:51:05.803 LINK spdk_tgt 00:51:05.803 LINK rpc_client_test 00:51:05.803 CC examples/vmd/lsvmd/lsvmd.o 00:51:05.803 CC examples/idxd/perf/perf.o 00:51:05.803 CXX test/cpp_headers/blob.o 00:51:06.083 CXX test/cpp_headers/conf.o 00:51:06.083 LINK lsvmd 00:51:06.083 LINK hello_sock 00:51:06.083 LINK vhost_fuzz 00:51:06.083 LINK memory_ut 00:51:06.083 CC app/spdk_lspci/spdk_lspci.o 00:51:06.083 CXX test/cpp_headers/config.o 00:51:06.083 CXX test/cpp_headers/cpuset.o 00:51:06.083 CC app/spdk_nvme_perf/perf.o 00:51:06.341 CXX test/cpp_headers/crc16.o 00:51:06.341 LINK idxd_perf 00:51:06.341 CC examples/vmd/led/led.o 00:51:06.341 CC app/spdk_nvme_identify/identify.o 00:51:06.341 LINK spdk_lspci 00:51:06.341 CXX test/cpp_headers/crc32.o 00:51:06.600 LINK led 00:51:06.600 CC app/spdk_nvme_discover/discovery_aer.o 00:51:06.600 CC app/spdk_top/spdk_top.o 00:51:06.600 CC test/accel/dif/dif.o 00:51:06.858 CXX test/cpp_headers/crc64.o 00:51:06.858 LINK spdk_nvme_discover 00:51:06.858 CC examples/fsdev/hello_world/hello_fsdev.o 00:51:07.116 CXX test/cpp_headers/dif.o 00:51:07.116 CC test/blobfs/mkfs/mkfs.o 00:51:07.116 CXX test/cpp_headers/dma.o 00:51:07.116 LINK spdk_nvme_perf 00:51:07.116 LINK iscsi_fuzz 00:51:07.116 LINK mkfs 00:51:07.116 LINK hello_fsdev 00:51:07.116 CXX test/cpp_headers/endian.o 00:51:07.374 CXX test/cpp_headers/env_dpdk.o 00:51:07.374 LINK spdk_nvme_identify 00:51:07.374 LINK dif 00:51:07.374 CC test/event/event_perf/event_perf.o 00:51:07.374 CXX test/cpp_headers/env.o 00:51:07.374 CC test/event/reactor/reactor.o 00:51:07.630 LINK event_perf 00:51:07.630 CC test/event/reactor_perf/reactor_perf.o 00:51:07.630 CC test/event/app_repeat/app_repeat.o 00:51:07.630 CC test/event/scheduler/scheduler.o 00:51:07.630 LINK spdk_top 00:51:07.631 CC app/vhost/vhost.o 00:51:07.631 CXX test/cpp_headers/event.o 00:51:07.631 LINK reactor 00:51:07.888 LINK reactor_perf 00:51:07.888 LINK app_repeat 00:51:07.888 CC examples/accel/perf/accel_perf.o 00:51:08.145 LINK scheduler 00:51:08.145 CXX test/cpp_headers/fd_group.o 00:51:08.145 LINK vhost 00:51:08.145 CC app/spdk_dd/spdk_dd.o 00:51:08.403 CC examples/nvme/hello_world/hello_world.o 00:51:08.403 CC examples/blob/hello_world/hello_blob.o 00:51:08.403 CC examples/blob/cli/blobcli.o 00:51:08.403 CXX test/cpp_headers/fd.o 00:51:08.403 CC app/fio/nvme/fio_plugin.o 00:51:08.661 CXX test/cpp_headers/file.o 00:51:08.661 LINK hello_world 00:51:08.661 CC test/nvme/aer/aer.o 00:51:08.661 CC test/lvol/esnap/esnap.o 00:51:08.661 LINK hello_blob 00:51:08.661 LINK spdk_dd 00:51:08.661 LINK accel_perf 00:51:08.661 CXX test/cpp_headers/fsdev.o 00:51:08.918 CC 
examples/nvme/reconnect/reconnect.o 00:51:08.918 LINK blobcli 00:51:08.918 CXX test/cpp_headers/fsdev_module.o 00:51:08.918 LINK aer 00:51:08.918 LINK spdk_nvme 00:51:09.175 CXX test/cpp_headers/ftl.o 00:51:09.175 CC app/fio/bdev/fio_plugin.o 00:51:09.175 CXX test/cpp_headers/fuse_dispatcher.o 00:51:09.175 CC test/nvme/reset/reset.o 00:51:09.175 CC examples/bdev/hello_world/hello_bdev.o 00:51:09.175 CC test/bdev/bdevio/bdevio.o 00:51:09.175 CC test/nvme/sgl/sgl.o 00:51:09.175 LINK reconnect 00:51:09.433 CXX test/cpp_headers/gpt_spec.o 00:51:09.433 CC test/nvme/e2edp/nvme_dp.o 00:51:09.433 LINK hello_bdev 00:51:09.433 LINK reset 00:51:09.691 CC examples/nvme/nvme_manage/nvme_manage.o 00:51:09.691 CXX test/cpp_headers/hexlify.o 00:51:09.691 LINK sgl 00:51:09.691 LINK bdevio 00:51:09.691 CXX test/cpp_headers/histogram_data.o 00:51:09.691 LINK spdk_bdev 00:51:09.691 LINK nvme_dp 00:51:09.949 CC examples/bdev/bdevperf/bdevperf.o 00:51:09.949 CC test/nvme/overhead/overhead.o 00:51:09.949 CC test/nvme/err_injection/err_injection.o 00:51:09.949 CXX test/cpp_headers/idxd.o 00:51:09.949 CXX test/cpp_headers/idxd_spec.o 00:51:09.949 CC test/nvme/startup/startup.o 00:51:09.949 CC test/nvme/reserve/reserve.o 00:51:09.949 LINK err_injection 00:51:09.949 CXX test/cpp_headers/init.o 00:51:10.207 LINK startup 00:51:10.207 LINK nvme_manage 00:51:10.207 LINK overhead 00:51:10.207 CC test/nvme/simple_copy/simple_copy.o 00:51:10.207 CXX test/cpp_headers/ioat.o 00:51:10.207 LINK reserve 00:51:10.207 CXX test/cpp_headers/ioat_spec.o 00:51:10.464 CC test/nvme/connect_stress/connect_stress.o 00:51:10.464 CC test/nvme/boot_partition/boot_partition.o 00:51:10.464 CC examples/nvme/arbitration/arbitration.o 00:51:10.464 CXX test/cpp_headers/iscsi_spec.o 00:51:10.464 LINK simple_copy 00:51:10.464 CC test/nvme/compliance/nvme_compliance.o 00:51:10.464 LINK boot_partition 00:51:10.721 CXX test/cpp_headers/json.o 00:51:10.721 CC examples/nvme/hotplug/hotplug.o 00:51:10.721 LINK connect_stress 00:51:10.721 LINK bdevperf 00:51:10.721 LINK arbitration 00:51:10.721 CC test/nvme/fused_ordering/fused_ordering.o 00:51:10.721 CXX test/cpp_headers/jsonrpc.o 00:51:10.721 CC test/nvme/doorbell_aers/doorbell_aers.o 00:51:10.978 LINK hotplug 00:51:10.978 LINK nvme_compliance 00:51:10.978 CC test/nvme/fdp/fdp.o 00:51:10.978 CXX test/cpp_headers/keyring.o 00:51:10.978 CXX test/cpp_headers/keyring_module.o 00:51:10.978 LINK fused_ordering 00:51:10.978 LINK doorbell_aers 00:51:10.978 CXX test/cpp_headers/likely.o 00:51:10.978 CC test/nvme/cuse/cuse.o 00:51:11.235 CXX test/cpp_headers/log.o 00:51:11.235 CC examples/nvme/cmb_copy/cmb_copy.o 00:51:11.235 CC examples/nvme/abort/abort.o 00:51:11.235 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:51:11.235 CXX test/cpp_headers/lvol.o 00:51:11.235 CXX test/cpp_headers/md5.o 00:51:11.235 LINK fdp 00:51:11.235 CXX test/cpp_headers/memory.o 00:51:11.235 CXX test/cpp_headers/mmio.o 00:51:11.235 LINK cmb_copy 00:51:11.235 CXX test/cpp_headers/nbd.o 00:51:11.235 LINK pmr_persistence 00:51:11.494 CXX test/cpp_headers/net.o 00:51:11.494 CXX test/cpp_headers/notify.o 00:51:11.494 CXX test/cpp_headers/nvme_intel.o 00:51:11.494 CXX test/cpp_headers/nvme.o 00:51:11.494 CXX test/cpp_headers/nvme_ocssd.o 00:51:11.494 CXX test/cpp_headers/nvme_ocssd_spec.o 00:51:11.494 CXX test/cpp_headers/nvme_spec.o 00:51:11.494 LINK abort 00:51:11.494 CXX test/cpp_headers/nvme_zns.o 00:51:11.494 CXX test/cpp_headers/nvmf_cmd.o 00:51:11.753 CXX test/cpp_headers/nvmf_fc_spec.o 00:51:11.753 CXX test/cpp_headers/nvmf.o 
00:51:11.753 CXX test/cpp_headers/nvmf_spec.o 00:51:11.753 CXX test/cpp_headers/nvmf_transport.o 00:51:11.753 CXX test/cpp_headers/opal.o 00:51:11.753 CXX test/cpp_headers/opal_spec.o 00:51:11.753 CXX test/cpp_headers/pci_ids.o 00:51:11.753 CXX test/cpp_headers/pipe.o 00:51:12.012 CC examples/nvmf/nvmf/nvmf.o 00:51:12.012 CXX test/cpp_headers/queue.o 00:51:12.012 CXX test/cpp_headers/reduce.o 00:51:12.012 CXX test/cpp_headers/rpc.o 00:51:12.012 CXX test/cpp_headers/scheduler.o 00:51:12.012 CXX test/cpp_headers/scsi.o 00:51:12.012 CXX test/cpp_headers/scsi_spec.o 00:51:12.012 CXX test/cpp_headers/sock.o 00:51:12.012 CXX test/cpp_headers/stdinc.o 00:51:12.012 CXX test/cpp_headers/string.o 00:51:12.012 CXX test/cpp_headers/thread.o 00:51:12.012 CXX test/cpp_headers/trace.o 00:51:12.271 CXX test/cpp_headers/trace_parser.o 00:51:12.271 CXX test/cpp_headers/tree.o 00:51:12.271 CXX test/cpp_headers/ublk.o 00:51:12.271 CXX test/cpp_headers/util.o 00:51:12.271 LINK nvmf 00:51:12.271 CXX test/cpp_headers/uuid.o 00:51:12.271 CXX test/cpp_headers/version.o 00:51:12.271 CXX test/cpp_headers/vfio_user_pci.o 00:51:12.271 CXX test/cpp_headers/vfio_user_spec.o 00:51:12.271 CXX test/cpp_headers/vhost.o 00:51:12.271 CXX test/cpp_headers/vmd.o 00:51:12.271 CXX test/cpp_headers/xor.o 00:51:12.271 CXX test/cpp_headers/zipf.o 00:51:12.531 LINK cuse 00:51:13.909 LINK esnap 00:51:14.168 00:51:14.168 real 1m27.764s 00:51:14.168 user 8m30.468s 00:51:14.168 sys 1m35.132s 00:51:14.168 05:50:08 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:51:14.168 05:50:08 make -- common/autotest_common.sh@10 -- $ set +x 00:51:14.168 ************************************ 00:51:14.168 END TEST make 00:51:14.168 ************************************ 00:51:14.168 05:50:08 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:51:14.168 05:50:08 -- pm/common@29 -- $ signal_monitor_resources TERM 00:51:14.168 05:50:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:51:14.168 05:50:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:51:14.168 05:50:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:51:14.168 05:50:08 -- pm/common@44 -- $ pid=5302 00:51:14.168 05:50:08 -- pm/common@50 -- $ kill -TERM 5302 00:51:14.168 05:50:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:51:14.168 05:50:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:51:14.168 05:50:08 -- pm/common@44 -- $ pid=5304 00:51:14.168 05:50:08 -- pm/common@50 -- $ kill -TERM 5304 00:51:14.168 05:50:08 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:51:14.168 05:50:08 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:51:14.168 05:50:08 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:51:14.168 05:50:08 -- common/autotest_common.sh@1711 -- # lcov --version 00:51:14.168 05:50:08 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:51:14.168 05:50:08 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:51:14.168 05:50:08 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:51:14.168 05:50:08 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:51:14.168 05:50:08 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:51:14.168 05:50:08 -- scripts/common.sh@336 -- # IFS=.-: 00:51:14.168 05:50:08 -- scripts/common.sh@336 -- # read -ra ver1 00:51:14.168 05:50:08 -- scripts/common.sh@337 -- # IFS=.-: 
00:51:14.168 05:50:08 -- scripts/common.sh@337 -- # read -ra ver2 00:51:14.168 05:50:08 -- scripts/common.sh@338 -- # local 'op=<' 00:51:14.168 05:50:08 -- scripts/common.sh@340 -- # ver1_l=2 00:51:14.168 05:50:08 -- scripts/common.sh@341 -- # ver2_l=1 00:51:14.168 05:50:08 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:51:14.168 05:50:08 -- scripts/common.sh@344 -- # case "$op" in 00:51:14.168 05:50:08 -- scripts/common.sh@345 -- # : 1 00:51:14.168 05:50:08 -- scripts/common.sh@364 -- # (( v = 0 )) 00:51:14.168 05:50:08 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:51:14.168 05:50:08 -- scripts/common.sh@365 -- # decimal 1 00:51:14.168 05:50:08 -- scripts/common.sh@353 -- # local d=1 00:51:14.168 05:50:08 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:51:14.168 05:50:08 -- scripts/common.sh@355 -- # echo 1 00:51:14.168 05:50:08 -- scripts/common.sh@365 -- # ver1[v]=1 00:51:14.168 05:50:08 -- scripts/common.sh@366 -- # decimal 2 00:51:14.168 05:50:08 -- scripts/common.sh@353 -- # local d=2 00:51:14.168 05:50:08 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:51:14.168 05:50:08 -- scripts/common.sh@355 -- # echo 2 00:51:14.168 05:50:08 -- scripts/common.sh@366 -- # ver2[v]=2 00:51:14.168 05:50:08 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:51:14.168 05:50:08 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:51:14.168 05:50:08 -- scripts/common.sh@368 -- # return 0 00:51:14.168 05:50:08 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:51:14.168 05:50:08 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:51:14.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:14.168 --rc genhtml_branch_coverage=1 00:51:14.168 --rc genhtml_function_coverage=1 00:51:14.168 --rc genhtml_legend=1 00:51:14.168 --rc geninfo_all_blocks=1 00:51:14.168 --rc geninfo_unexecuted_blocks=1 00:51:14.168 00:51:14.168 ' 00:51:14.168 05:50:08 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:51:14.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:14.168 --rc genhtml_branch_coverage=1 00:51:14.168 --rc genhtml_function_coverage=1 00:51:14.168 --rc genhtml_legend=1 00:51:14.168 --rc geninfo_all_blocks=1 00:51:14.168 --rc geninfo_unexecuted_blocks=1 00:51:14.168 00:51:14.168 ' 00:51:14.168 05:50:08 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:51:14.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:14.168 --rc genhtml_branch_coverage=1 00:51:14.168 --rc genhtml_function_coverage=1 00:51:14.168 --rc genhtml_legend=1 00:51:14.168 --rc geninfo_all_blocks=1 00:51:14.168 --rc geninfo_unexecuted_blocks=1 00:51:14.168 00:51:14.168 ' 00:51:14.168 05:50:08 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:51:14.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:14.168 --rc genhtml_branch_coverage=1 00:51:14.168 --rc genhtml_function_coverage=1 00:51:14.168 --rc genhtml_legend=1 00:51:14.168 --rc geninfo_all_blocks=1 00:51:14.168 --rc geninfo_unexecuted_blocks=1 00:51:14.168 00:51:14.168 ' 00:51:14.168 05:50:08 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:51:14.168 05:50:08 -- nvmf/common.sh@7 -- # uname -s 00:51:14.168 05:50:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:51:14.168 05:50:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:51:14.168 05:50:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:51:14.168 05:50:08 -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:51:14.168 05:50:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:51:14.168 05:50:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:51:14.168 05:50:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:51:14.168 05:50:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:51:14.168 05:50:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:51:14.168 05:50:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:51:14.168 05:50:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:51:14.168 05:50:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 00:51:14.168 05:50:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:51:14.168 05:50:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:51:14.168 05:50:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:51:14.168 05:50:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:51:14.168 05:50:08 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:51:14.168 05:50:08 -- scripts/common.sh@15 -- # shopt -s extglob 00:51:14.428 05:50:08 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:51:14.428 05:50:08 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:51:14.428 05:50:08 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:51:14.428 05:50:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:14.428 05:50:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:14.428 05:50:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:14.428 05:50:08 -- paths/export.sh@5 -- # export PATH 00:51:14.428 05:50:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:14.428 05:50:08 -- nvmf/common.sh@51 -- # : 0 00:51:14.428 05:50:08 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:51:14.428 05:50:08 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:51:14.428 05:50:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:51:14.428 05:50:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:51:14.428 05:50:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:51:14.428 05:50:08 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:51:14.428 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:51:14.428 05:50:08 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:51:14.428 05:50:08 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:51:14.428 05:50:08 -- nvmf/common.sh@55 -- # 
have_pci_nics=0 00:51:14.428 05:50:08 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:51:14.428 05:50:08 -- spdk/autotest.sh@32 -- # uname -s 00:51:14.428 05:50:08 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:51:14.428 05:50:08 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:51:14.428 05:50:08 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:51:14.429 05:50:08 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:51:14.429 05:50:08 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:51:14.429 05:50:08 -- spdk/autotest.sh@44 -- # modprobe nbd 00:51:14.429 05:50:08 -- spdk/autotest.sh@46 -- # type -P udevadm 00:51:14.429 05:50:08 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:51:14.429 05:50:08 -- spdk/autotest.sh@48 -- # udevadm_pid=56101 00:51:14.429 05:50:08 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:51:14.429 05:50:08 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:51:14.429 05:50:08 -- pm/common@17 -- # local monitor 00:51:14.429 05:50:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:51:14.429 05:50:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:51:14.429 05:50:08 -- pm/common@25 -- # sleep 1 00:51:14.429 05:50:08 -- pm/common@21 -- # date +%s 00:51:14.429 05:50:08 -- pm/common@21 -- # date +%s 00:51:14.429 05:50:08 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733723408 00:51:14.429 05:50:08 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733723408 00:51:14.429 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733723408_collect-cpu-load.pm.log 00:51:14.429 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733723408_collect-vmstat.pm.log 00:51:15.367 05:50:09 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:51:15.367 05:50:09 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:51:15.367 05:50:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:51:15.367 05:50:09 -- common/autotest_common.sh@10 -- # set +x 00:51:15.367 05:50:09 -- spdk/autotest.sh@59 -- # create_test_list 00:51:15.367 05:50:09 -- common/autotest_common.sh@752 -- # xtrace_disable 00:51:15.367 05:50:09 -- common/autotest_common.sh@10 -- # set +x 00:51:15.367 05:50:09 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:51:15.367 05:50:09 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:51:15.367 05:50:09 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:51:15.367 05:50:09 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:51:15.367 05:50:09 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:51:15.367 05:50:09 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:51:15.367 05:50:09 -- common/autotest_common.sh@1457 -- # uname 00:51:15.367 05:50:09 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:51:15.367 05:50:09 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:51:15.367 05:50:09 -- common/autotest_common.sh@1477 -- # uname 00:51:15.367 05:50:09 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 
00:51:15.367 05:50:09 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:51:15.367 05:50:09 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:51:15.626 lcov: LCOV version 1.15 00:51:15.626 05:50:09 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:51:30.508 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:51:30.509 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:51:42.823 05:50:36 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:51:42.823 05:50:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:51:42.823 05:50:36 -- common/autotest_common.sh@10 -- # set +x 00:51:42.823 05:50:36 -- spdk/autotest.sh@78 -- # rm -f 00:51:42.823 05:50:36 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:51:42.823 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:51:42.823 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:51:42.823 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:51:42.823 05:50:36 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:51:42.823 05:50:36 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:51:42.823 05:50:36 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:51:42.823 05:50:36 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:51:42.823 05:50:36 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:51:42.823 05:50:36 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:51:42.823 05:50:36 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:51:42.823 05:50:36 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:51:42.823 05:50:36 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:51:42.823 05:50:36 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:51:42.823 05:50:36 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:51:42.823 05:50:36 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:51:42.823 05:50:36 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:51:42.823 05:50:36 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:51:42.823 05:50:36 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:51:42.823 05:50:36 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:51:42.823 05:50:36 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:51:42.823 05:50:36 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:51:42.823 05:50:36 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:51:42.823 05:50:36 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:51:42.823 05:50:36 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:51:42.823 05:50:36 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:51:42.823 05:50:36 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:51:42.823 05:50:36 -- 
common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:51:42.823 05:50:36 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:51:42.823 05:50:36 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:51:42.823 05:50:36 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:51:42.823 05:50:36 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:51:42.823 05:50:36 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:51:42.823 05:50:36 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:51:42.823 05:50:36 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:51:42.823 05:50:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:51:42.823 05:50:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:51:42.823 05:50:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:51:42.823 05:50:36 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:51:42.823 05:50:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:51:42.823 No valid GPT data, bailing 00:51:42.823 05:50:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:51:42.823 05:50:36 -- scripts/common.sh@394 -- # pt= 00:51:42.823 05:50:36 -- scripts/common.sh@395 -- # return 1 00:51:42.823 05:50:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:51:42.823 1+0 records in 00:51:42.823 1+0 records out 00:51:42.823 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00423218 s, 248 MB/s 00:51:42.823 05:50:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:51:42.823 05:50:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:51:42.823 05:50:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:51:42.823 05:50:36 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:51:42.823 05:50:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:51:42.823 No valid GPT data, bailing 00:51:42.823 05:50:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:51:42.823 05:50:37 -- scripts/common.sh@394 -- # pt= 00:51:42.823 05:50:37 -- scripts/common.sh@395 -- # return 1 00:51:42.823 05:50:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:51:42.823 1+0 records in 00:51:42.823 1+0 records out 00:51:42.823 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00422688 s, 248 MB/s 00:51:42.823 05:50:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:51:42.823 05:50:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:51:42.823 05:50:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:51:42.823 05:50:37 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:51:42.823 05:50:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:51:42.823 No valid GPT data, bailing 00:51:42.823 05:50:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:51:42.823 05:50:37 -- scripts/common.sh@394 -- # pt= 00:51:42.823 05:50:37 -- scripts/common.sh@395 -- # return 1 00:51:42.823 05:50:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:51:42.823 1+0 records in 00:51:42.823 1+0 records out 00:51:42.823 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00442016 s, 237 MB/s 00:51:42.823 05:50:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:51:42.823 05:50:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:51:42.823 05:50:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:51:42.823 
05:50:37 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:51:42.823 05:50:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:51:42.823 No valid GPT data, bailing 00:51:42.823 05:50:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:51:42.823 05:50:37 -- scripts/common.sh@394 -- # pt= 00:51:42.823 05:50:37 -- scripts/common.sh@395 -- # return 1 00:51:42.823 05:50:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:51:42.823 1+0 records in 00:51:42.823 1+0 records out 00:51:42.823 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00472419 s, 222 MB/s 00:51:42.823 05:50:37 -- spdk/autotest.sh@105 -- # sync 00:51:43.083 05:50:37 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:51:43.083 05:50:37 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:51:43.083 05:50:37 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:51:45.621 05:50:39 -- spdk/autotest.sh@111 -- # uname -s 00:51:45.621 05:50:39 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:51:45.621 05:50:39 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:51:45.621 05:50:39 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:51:45.879 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:51:45.879 Hugepages 00:51:45.879 node hugesize free / total 00:51:45.879 node0 1048576kB 0 / 0 00:51:45.879 node0 2048kB 0 / 0 00:51:45.879 00:51:45.879 Type BDF Vendor Device NUMA Driver Device Block devices 00:51:46.137 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:51:46.137 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:51:46.137 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:51:46.137 05:50:40 -- spdk/autotest.sh@117 -- # uname -s 00:51:46.137 05:50:40 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:51:46.137 05:50:40 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:51:46.137 05:50:40 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:51:47.071 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:51:47.071 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:51:47.071 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:51:47.071 05:50:41 -- common/autotest_common.sh@1517 -- # sleep 1 00:51:48.029 05:50:42 -- common/autotest_common.sh@1518 -- # bdfs=() 00:51:48.029 05:50:42 -- common/autotest_common.sh@1518 -- # local bdfs 00:51:48.029 05:50:42 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:51:48.029 05:50:42 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:51:48.029 05:50:42 -- common/autotest_common.sh@1498 -- # bdfs=() 00:51:48.030 05:50:42 -- common/autotest_common.sh@1498 -- # local bdfs 00:51:48.030 05:50:42 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:51:48.030 05:50:42 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:51:48.030 05:50:42 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:51:48.287 05:50:42 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:51:48.287 05:50:42 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:51:48.287 05:50:42 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 
00:51:48.545 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:51:48.545 Waiting for block devices as requested 00:51:48.545 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:51:48.803 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:51:48.803 05:50:43 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:51:48.803 05:50:43 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:51:48.803 05:50:43 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:51:48.803 05:50:43 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:51:48.803 05:50:43 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:51:48.803 05:50:43 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:51:48.803 05:50:43 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:51:48.803 05:50:43 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:51:48.803 05:50:43 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:51:48.803 05:50:43 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:51:48.803 05:50:43 -- common/autotest_common.sh@1531 -- # grep oacs 00:51:48.803 05:50:43 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:51:48.803 05:50:43 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:51:48.803 05:50:43 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:51:48.803 05:50:43 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:51:48.803 05:50:43 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:51:48.803 05:50:43 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:51:48.803 05:50:43 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:51:48.803 05:50:43 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:51:48.803 05:50:43 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:51:48.803 05:50:43 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:51:48.803 05:50:43 -- common/autotest_common.sh@1543 -- # continue 00:51:48.803 05:50:43 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:51:48.803 05:50:43 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:51:48.803 05:50:43 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:51:48.803 05:50:43 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:51:48.803 05:50:43 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:51:48.803 05:50:43 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:51:48.803 05:50:43 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:51:48.803 05:50:43 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:51:48.803 05:50:43 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:51:48.803 05:50:43 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:51:48.803 05:50:43 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:51:48.803 05:50:43 -- common/autotest_common.sh@1531 -- # grep oacs 00:51:48.803 05:50:43 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:51:48.803 05:50:43 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:51:48.803 05:50:43 -- 
common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:51:48.803 05:50:43 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:51:48.803 05:50:43 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:51:48.803 05:50:43 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:51:48.803 05:50:43 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:51:48.803 05:50:43 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:51:48.803 05:50:43 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:51:48.803 05:50:43 -- common/autotest_common.sh@1543 -- # continue 00:51:48.803 05:50:43 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:51:48.803 05:50:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:51:48.803 05:50:43 -- common/autotest_common.sh@10 -- # set +x 00:51:48.803 05:50:43 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:51:48.803 05:50:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:51:48.803 05:50:43 -- common/autotest_common.sh@10 -- # set +x 00:51:48.803 05:50:43 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:51:49.739 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:51:49.739 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:51:49.739 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:51:49.739 05:50:44 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:51:49.739 05:50:44 -- common/autotest_common.sh@732 -- # xtrace_disable 00:51:49.739 05:50:44 -- common/autotest_common.sh@10 -- # set +x 00:51:49.739 05:50:44 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:51:49.739 05:50:44 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:51:49.739 05:50:44 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:51:49.739 05:50:44 -- common/autotest_common.sh@1563 -- # bdfs=() 00:51:49.739 05:50:44 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:51:49.739 05:50:44 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:51:49.739 05:50:44 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:51:49.739 05:50:44 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:51:49.739 05:50:44 -- common/autotest_common.sh@1498 -- # bdfs=() 00:51:49.739 05:50:44 -- common/autotest_common.sh@1498 -- # local bdfs 00:51:49.739 05:50:44 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:51:49.739 05:50:44 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:51:49.739 05:50:44 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:51:49.739 05:50:44 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:51:49.739 05:50:44 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:51:49.739 05:50:44 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:51:49.739 05:50:44 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:51:49.739 05:50:44 -- common/autotest_common.sh@1566 -- # device=0x0010 00:51:49.739 05:50:44 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:51:49.739 05:50:44 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:51:49.739 05:50:44 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:51:49.739 05:50:44 -- common/autotest_common.sh@1566 -- # device=0x0010 00:51:49.739 05:50:44 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:51:49.739 05:50:44 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:51:49.739 05:50:44 -- common/autotest_common.sh@1572 -- # return 0 00:51:49.739 05:50:44 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:51:49.739 05:50:44 -- common/autotest_common.sh@1580 -- # return 0 00:51:49.739 05:50:44 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:51:49.739 05:50:44 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:51:49.739 05:50:44 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:51:49.739 05:50:44 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:51:49.739 05:50:44 -- spdk/autotest.sh@149 -- # timing_enter lib 00:51:49.739 05:50:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:51:49.739 05:50:44 -- common/autotest_common.sh@10 -- # set +x 00:51:49.739 05:50:44 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:51:49.739 05:50:44 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:51:49.739 05:50:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:51:49.739 05:50:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:51:49.739 05:50:44 -- common/autotest_common.sh@10 -- # set +x 00:51:49.998 ************************************ 00:51:49.998 START TEST env 00:51:49.998 ************************************ 00:51:49.999 05:50:44 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:51:49.999 * Looking for test storage... 00:51:49.999 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:51:49.999 05:50:44 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:51:49.999 05:50:44 env -- common/autotest_common.sh@1711 -- # lcov --version 00:51:49.999 05:50:44 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:51:49.999 05:50:44 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:51:49.999 05:50:44 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:51:49.999 05:50:44 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:51:49.999 05:50:44 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:51:49.999 05:50:44 env -- scripts/common.sh@336 -- # IFS=.-: 00:51:49.999 05:50:44 env -- scripts/common.sh@336 -- # read -ra ver1 00:51:49.999 05:50:44 env -- scripts/common.sh@337 -- # IFS=.-: 00:51:49.999 05:50:44 env -- scripts/common.sh@337 -- # read -ra ver2 00:51:49.999 05:50:44 env -- scripts/common.sh@338 -- # local 'op=<' 00:51:49.999 05:50:44 env -- scripts/common.sh@340 -- # ver1_l=2 00:51:49.999 05:50:44 env -- scripts/common.sh@341 -- # ver2_l=1 00:51:49.999 05:50:44 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:51:49.999 05:50:44 env -- scripts/common.sh@344 -- # case "$op" in 00:51:49.999 05:50:44 env -- scripts/common.sh@345 -- # : 1 00:51:49.999 05:50:44 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:51:49.999 05:50:44 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:51:49.999 05:50:44 env -- scripts/common.sh@365 -- # decimal 1 00:51:49.999 05:50:44 env -- scripts/common.sh@353 -- # local d=1 00:51:49.999 05:50:44 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:51:49.999 05:50:44 env -- scripts/common.sh@355 -- # echo 1 00:51:49.999 05:50:44 env -- scripts/common.sh@365 -- # ver1[v]=1 00:51:49.999 05:50:44 env -- scripts/common.sh@366 -- # decimal 2 00:51:49.999 05:50:44 env -- scripts/common.sh@353 -- # local d=2 00:51:49.999 05:50:44 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:51:49.999 05:50:44 env -- scripts/common.sh@355 -- # echo 2 00:51:49.999 05:50:44 env -- scripts/common.sh@366 -- # ver2[v]=2 00:51:49.999 05:50:44 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:51:49.999 05:50:44 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:51:49.999 05:50:44 env -- scripts/common.sh@368 -- # return 0 00:51:49.999 05:50:44 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:51:49.999 05:50:44 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:51:49.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:49.999 --rc genhtml_branch_coverage=1 00:51:49.999 --rc genhtml_function_coverage=1 00:51:49.999 --rc genhtml_legend=1 00:51:49.999 --rc geninfo_all_blocks=1 00:51:49.999 --rc geninfo_unexecuted_blocks=1 00:51:49.999 00:51:49.999 ' 00:51:49.999 05:50:44 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:51:49.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:49.999 --rc genhtml_branch_coverage=1 00:51:49.999 --rc genhtml_function_coverage=1 00:51:49.999 --rc genhtml_legend=1 00:51:49.999 --rc geninfo_all_blocks=1 00:51:49.999 --rc geninfo_unexecuted_blocks=1 00:51:49.999 00:51:49.999 ' 00:51:49.999 05:50:44 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:51:49.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:49.999 --rc genhtml_branch_coverage=1 00:51:49.999 --rc genhtml_function_coverage=1 00:51:49.999 --rc genhtml_legend=1 00:51:49.999 --rc geninfo_all_blocks=1 00:51:49.999 --rc geninfo_unexecuted_blocks=1 00:51:49.999 00:51:49.999 ' 00:51:49.999 05:50:44 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:51:49.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:49.999 --rc genhtml_branch_coverage=1 00:51:49.999 --rc genhtml_function_coverage=1 00:51:49.999 --rc genhtml_legend=1 00:51:49.999 --rc geninfo_all_blocks=1 00:51:49.999 --rc geninfo_unexecuted_blocks=1 00:51:49.999 00:51:49.999 ' 00:51:49.999 05:50:44 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:51:49.999 05:50:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:51:49.999 05:50:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:51:49.999 05:50:44 env -- common/autotest_common.sh@10 -- # set +x 00:51:49.999 ************************************ 00:51:49.999 START TEST env_memory 00:51:49.999 ************************************ 00:51:49.999 05:50:44 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:51:49.999 00:51:49.999 00:51:49.999 CUnit - A unit testing framework for C - Version 2.1-3 00:51:49.999 http://cunit.sourceforge.net/ 00:51:49.999 00:51:49.999 00:51:49.999 Suite: memory 00:51:50.258 Test: alloc and free memory map ...[2024-12-09 05:50:44.589350] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:51:50.258 passed 00:51:50.258 Test: mem map translation ...[2024-12-09 05:50:44.619938] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:51:50.258 [2024-12-09 05:50:44.619970] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:51:50.258 [2024-12-09 05:50:44.620025] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:51:50.258 [2024-12-09 05:50:44.620036] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:51:50.258 passed 00:51:50.258 Test: mem map registration ...[2024-12-09 05:50:44.683685] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:51:50.258 [2024-12-09 05:50:44.683712] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:51:50.258 passed 00:51:50.258 Test: mem map adjacent registrations ...passed 00:51:50.258 00:51:50.258 Run Summary: Type Total Ran Passed Failed Inactive 00:51:50.258 suites 1 1 n/a 0 0 00:51:50.258 tests 4 4 4 0 0 00:51:50.258 asserts 152 152 152 0 n/a 00:51:50.258 00:51:50.258 Elapsed time = 0.212 seconds 00:51:50.258 00:51:50.258 real 0m0.232s 00:51:50.258 user 0m0.218s 00:51:50.258 sys 0m0.010s 00:51:50.258 05:50:44 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:51:50.258 05:50:44 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:51:50.258 ************************************ 00:51:50.258 END TEST env_memory 00:51:50.258 ************************************ 00:51:50.258 05:50:44 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:51:50.258 05:50:44 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:51:50.258 05:50:44 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:51:50.258 05:50:44 env -- common/autotest_common.sh@10 -- # set +x 00:51:50.258 ************************************ 00:51:50.258 START TEST env_vtophys 00:51:50.258 ************************************ 00:51:50.258 05:50:44 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:51:50.258 EAL: lib.eal log level changed from notice to debug 00:51:50.258 EAL: Detected lcore 0 as core 0 on socket 0 00:51:50.258 EAL: Detected lcore 1 as core 0 on socket 0 00:51:50.258 EAL: Detected lcore 2 as core 0 on socket 0 00:51:50.258 EAL: Detected lcore 3 as core 0 on socket 0 00:51:50.258 EAL: Detected lcore 4 as core 0 on socket 0 00:51:50.258 EAL: Detected lcore 5 as core 0 on socket 0 00:51:50.258 EAL: Detected lcore 6 as core 0 on socket 0 00:51:50.258 EAL: Detected lcore 7 as core 0 on socket 0 00:51:50.258 EAL: Detected lcore 8 as core 0 on socket 0 00:51:50.258 EAL: Detected lcore 9 as core 0 on socket 0 00:51:50.517 EAL: Maximum logical cores by configuration: 128 00:51:50.517 EAL: Detected CPU lcores: 10 00:51:50.517 EAL: Detected NUMA nodes: 1 00:51:50.517 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:51:50.517 EAL: Detected shared linkage of DPDK 00:51:50.517 EAL: No 
shared files mode enabled, IPC will be disabled 00:51:50.517 EAL: Selected IOVA mode 'PA' 00:51:50.517 EAL: Probing VFIO support... 00:51:50.517 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:51:50.517 EAL: VFIO modules not loaded, skipping VFIO support... 00:51:50.517 EAL: Ask a virtual area of 0x2e000 bytes 00:51:50.517 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:51:50.517 EAL: Setting up physically contiguous memory... 00:51:50.517 EAL: Setting maximum number of open files to 524288 00:51:50.517 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:51:50.517 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:51:50.517 EAL: Ask a virtual area of 0x61000 bytes 00:51:50.517 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:51:50.517 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:51:50.517 EAL: Ask a virtual area of 0x400000000 bytes 00:51:50.517 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:51:50.517 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:51:50.517 EAL: Ask a virtual area of 0x61000 bytes 00:51:50.517 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:51:50.517 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:51:50.517 EAL: Ask a virtual area of 0x400000000 bytes 00:51:50.517 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:51:50.517 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:51:50.517 EAL: Ask a virtual area of 0x61000 bytes 00:51:50.517 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:51:50.517 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:51:50.517 EAL: Ask a virtual area of 0x400000000 bytes 00:51:50.517 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:51:50.517 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:51:50.517 EAL: Ask a virtual area of 0x61000 bytes 00:51:50.517 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:51:50.517 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:51:50.517 EAL: Ask a virtual area of 0x400000000 bytes 00:51:50.517 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:51:50.517 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:51:50.517 EAL: Hugepages will be freed exactly as allocated. 00:51:50.517 EAL: No shared files mode enabled, IPC is disabled 00:51:50.517 EAL: No shared files mode enabled, IPC is disabled 00:51:50.517 EAL: TSC frequency is ~2200000 KHz 00:51:50.517 EAL: Main lcore 0 is ready (tid=7f474662aa00;cpuset=[0]) 00:51:50.517 EAL: Trying to obtain current memory policy. 00:51:50.517 EAL: Setting policy MPOL_PREFERRED for socket 0 00:51:50.517 EAL: Restoring previous memory policy: 0 00:51:50.517 EAL: request: mp_malloc_sync 00:51:50.517 EAL: No shared files mode enabled, IPC is disabled 00:51:50.517 EAL: Heap on socket 0 was expanded by 2MB 00:51:50.517 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:51:50.517 EAL: No PCI address specified using 'addr=' in: bus=pci 00:51:50.517 EAL: Mem event callback 'spdk:(nil)' registered 00:51:50.517 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:51:50.517 00:51:50.517 00:51:50.517 CUnit - A unit testing framework for C - Version 2.1-3 00:51:50.517 http://cunit.sourceforge.net/ 00:51:50.517 00:51:50.517 00:51:50.517 Suite: components_suite 00:51:50.517 Test: vtophys_malloc_test ...passed 00:51:50.517 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:51:50.517 EAL: Setting policy MPOL_PREFERRED for socket 0 00:51:50.517 EAL: Restoring previous memory policy: 4 00:51:50.517 EAL: Calling mem event callback 'spdk:(nil)' 00:51:50.517 EAL: request: mp_malloc_sync 00:51:50.517 EAL: No shared files mode enabled, IPC is disabled 00:51:50.517 EAL: Heap on socket 0 was expanded by 4MB 00:51:50.517 EAL: Calling mem event callback 'spdk:(nil)' 00:51:50.517 EAL: request: mp_malloc_sync 00:51:50.517 EAL: No shared files mode enabled, IPC is disabled 00:51:50.517 EAL: Heap on socket 0 was shrunk by 4MB 00:51:50.517 EAL: Trying to obtain current memory policy. 00:51:50.517 EAL: Setting policy MPOL_PREFERRED for socket 0 00:51:50.517 EAL: Restoring previous memory policy: 4 00:51:50.517 EAL: Calling mem event callback 'spdk:(nil)' 00:51:50.517 EAL: request: mp_malloc_sync 00:51:50.517 EAL: No shared files mode enabled, IPC is disabled 00:51:50.517 EAL: Heap on socket 0 was expanded by 6MB 00:51:50.517 EAL: Calling mem event callback 'spdk:(nil)' 00:51:50.517 EAL: request: mp_malloc_sync 00:51:50.517 EAL: No shared files mode enabled, IPC is disabled 00:51:50.517 EAL: Heap on socket 0 was shrunk by 6MB 00:51:50.517 EAL: Trying to obtain current memory policy. 00:51:50.517 EAL: Setting policy MPOL_PREFERRED for socket 0 00:51:50.517 EAL: Restoring previous memory policy: 4 00:51:50.517 EAL: Calling mem event callback 'spdk:(nil)' 00:51:50.517 EAL: request: mp_malloc_sync 00:51:50.517 EAL: No shared files mode enabled, IPC is disabled 00:51:50.517 EAL: Heap on socket 0 was expanded by 10MB 00:51:50.517 EAL: Calling mem event callback 'spdk:(nil)' 00:51:50.517 EAL: request: mp_malloc_sync 00:51:50.517 EAL: No shared files mode enabled, IPC is disabled 00:51:50.518 EAL: Heap on socket 0 was shrunk by 10MB 00:51:50.518 EAL: Trying to obtain current memory policy. 00:51:50.518 EAL: Setting policy MPOL_PREFERRED for socket 0 00:51:50.518 EAL: Restoring previous memory policy: 4 00:51:50.518 EAL: Calling mem event callback 'spdk:(nil)' 00:51:50.518 EAL: request: mp_malloc_sync 00:51:50.518 EAL: No shared files mode enabled, IPC is disabled 00:51:50.518 EAL: Heap on socket 0 was expanded by 18MB 00:51:50.518 EAL: Calling mem event callback 'spdk:(nil)' 00:51:50.518 EAL: request: mp_malloc_sync 00:51:50.518 EAL: No shared files mode enabled, IPC is disabled 00:51:50.518 EAL: Heap on socket 0 was shrunk by 18MB 00:51:50.518 EAL: Trying to obtain current memory policy. 00:51:50.518 EAL: Setting policy MPOL_PREFERRED for socket 0 00:51:50.518 EAL: Restoring previous memory policy: 4 00:51:50.518 EAL: Calling mem event callback 'spdk:(nil)' 00:51:50.518 EAL: request: mp_malloc_sync 00:51:50.518 EAL: No shared files mode enabled, IPC is disabled 00:51:50.518 EAL: Heap on socket 0 was expanded by 34MB 00:51:50.518 EAL: Calling mem event callback 'spdk:(nil)' 00:51:50.518 EAL: request: mp_malloc_sync 00:51:50.518 EAL: No shared files mode enabled, IPC is disabled 00:51:50.518 EAL: Heap on socket 0 was shrunk by 34MB 00:51:50.518 EAL: Trying to obtain current memory policy. 
00:51:50.518 EAL: Setting policy MPOL_PREFERRED for socket 0 00:51:50.518 EAL: Restoring previous memory policy: 4 00:51:50.518 EAL: Calling mem event callback 'spdk:(nil)' 00:51:50.518 EAL: request: mp_malloc_sync 00:51:50.518 EAL: No shared files mode enabled, IPC is disabled 00:51:50.518 EAL: Heap on socket 0 was expanded by 66MB 00:51:50.518 EAL: Calling mem event callback 'spdk:(nil)' 00:51:50.518 EAL: request: mp_malloc_sync 00:51:50.518 EAL: No shared files mode enabled, IPC is disabled 00:51:50.518 EAL: Heap on socket 0 was shrunk by 66MB 00:51:50.518 EAL: Trying to obtain current memory policy. 00:51:50.518 EAL: Setting policy MPOL_PREFERRED for socket 0 00:51:50.518 EAL: Restoring previous memory policy: 4 00:51:50.518 EAL: Calling mem event callback 'spdk:(nil)' 00:51:50.518 EAL: request: mp_malloc_sync 00:51:50.518 EAL: No shared files mode enabled, IPC is disabled 00:51:50.518 EAL: Heap on socket 0 was expanded by 130MB 00:51:50.518 EAL: Calling mem event callback 'spdk:(nil)' 00:51:50.518 EAL: request: mp_malloc_sync 00:51:50.518 EAL: No shared files mode enabled, IPC is disabled 00:51:50.518 EAL: Heap on socket 0 was shrunk by 130MB 00:51:50.518 EAL: Trying to obtain current memory policy. 00:51:50.518 EAL: Setting policy MPOL_PREFERRED for socket 0 00:51:50.775 EAL: Restoring previous memory policy: 4 00:51:50.775 EAL: Calling mem event callback 'spdk:(nil)' 00:51:50.775 EAL: request: mp_malloc_sync 00:51:50.775 EAL: No shared files mode enabled, IPC is disabled 00:51:50.775 EAL: Heap on socket 0 was expanded by 258MB 00:51:50.775 EAL: Calling mem event callback 'spdk:(nil)' 00:51:50.775 EAL: request: mp_malloc_sync 00:51:50.775 EAL: No shared files mode enabled, IPC is disabled 00:51:50.775 EAL: Heap on socket 0 was shrunk by 258MB 00:51:50.775 EAL: Trying to obtain current memory policy. 00:51:50.775 EAL: Setting policy MPOL_PREFERRED for socket 0 00:51:50.775 EAL: Restoring previous memory policy: 4 00:51:50.775 EAL: Calling mem event callback 'spdk:(nil)' 00:51:50.775 EAL: request: mp_malloc_sync 00:51:50.775 EAL: No shared files mode enabled, IPC is disabled 00:51:50.775 EAL: Heap on socket 0 was expanded by 514MB 00:51:50.775 EAL: Calling mem event callback 'spdk:(nil)' 00:51:51.033 EAL: request: mp_malloc_sync 00:51:51.033 EAL: No shared files mode enabled, IPC is disabled 00:51:51.033 EAL: Heap on socket 0 was shrunk by 514MB 00:51:51.033 EAL: Trying to obtain current memory policy. 
00:51:51.033 EAL: Setting policy MPOL_PREFERRED for socket 0 00:51:51.033 EAL: Restoring previous memory policy: 4 00:51:51.033 EAL: Calling mem event callback 'spdk:(nil)' 00:51:51.033 EAL: request: mp_malloc_sync 00:51:51.033 EAL: No shared files mode enabled, IPC is disabled 00:51:51.033 EAL: Heap on socket 0 was expanded by 1026MB 00:51:51.292 EAL: Calling mem event callback 'spdk:(nil)' 00:51:51.292 passed 00:51:51.292 00:51:51.292 Run Summary: Type Total Ran Passed Failed Inactive 00:51:51.292 suites 1 1 n/a 0 0 00:51:51.292 tests 2 2 2 0 0 00:51:51.292 asserts 5505 5505 5505 0 n/a 00:51:51.292 00:51:51.292 Elapsed time = 0.718 seconds 00:51:51.292 EAL: request: mp_malloc_sync 00:51:51.292 EAL: No shared files mode enabled, IPC is disabled 00:51:51.292 EAL: Heap on socket 0 was shrunk by 1026MB 00:51:51.292 EAL: Calling mem event callback 'spdk:(nil)' 00:51:51.292 EAL: request: mp_malloc_sync 00:51:51.292 EAL: No shared files mode enabled, IPC is disabled 00:51:51.292 EAL: Heap on socket 0 was shrunk by 2MB 00:51:51.292 EAL: No shared files mode enabled, IPC is disabled 00:51:51.292 EAL: No shared files mode enabled, IPC is disabled 00:51:51.292 EAL: No shared files mode enabled, IPC is disabled 00:51:51.292 00:51:51.292 real 0m0.918s 00:51:51.292 user 0m0.480s 00:51:51.292 sys 0m0.311s 00:51:51.292 05:50:45 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:51:51.292 ************************************ 00:51:51.292 END TEST env_vtophys 00:51:51.292 ************************************ 00:51:51.293 05:50:45 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:51:51.293 05:50:45 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:51:51.293 05:50:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:51:51.293 05:50:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:51:51.293 05:50:45 env -- common/autotest_common.sh@10 -- # set +x 00:51:51.293 ************************************ 00:51:51.293 START TEST env_pci 00:51:51.293 ************************************ 00:51:51.293 05:50:45 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:51:51.293 00:51:51.293 00:51:51.293 CUnit - A unit testing framework for C - Version 2.1-3 00:51:51.293 http://cunit.sourceforge.net/ 00:51:51.293 00:51:51.293 00:51:51.293 Suite: pci 00:51:51.293 Test: pci_hook ...[2024-12-09 05:50:45.805359] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58278 has claimed it 00:51:51.293 passed 00:51:51.293 00:51:51.293 Run Summary: Type Total Ran Passed Failed Inactive 00:51:51.293 suites 1 1 n/a 0 0 00:51:51.293 tests 1 1 1 0 0 00:51:51.293 asserts 25 25 25 0 n/a 00:51:51.293 00:51:51.293 Elapsed time = 0.002 seconds 00:51:51.293 EAL: Cannot find device (10000:00:01.0) 00:51:51.293 EAL: Failed to attach device on primary process 00:51:51.293 00:51:51.293 real 0m0.022s 00:51:51.293 user 0m0.006s 00:51:51.293 sys 0m0.015s 00:51:51.293 05:50:45 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:51:51.293 ************************************ 00:51:51.293 END TEST env_pci 00:51:51.293 ************************************ 00:51:51.293 05:50:45 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:51:51.293 05:50:45 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:51:51.293 05:50:45 env -- env/env.sh@15 -- # uname 00:51:51.293 05:50:45 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:51:51.293 05:50:45 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:51:51.293 05:50:45 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:51:51.293 05:50:45 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:51:51.293 05:50:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:51:51.293 05:50:45 env -- common/autotest_common.sh@10 -- # set +x 00:51:51.293 ************************************ 00:51:51.293 START TEST env_dpdk_post_init 00:51:51.293 ************************************ 00:51:51.293 05:50:45 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:51:51.552 EAL: Detected CPU lcores: 10 00:51:51.552 EAL: Detected NUMA nodes: 1 00:51:51.552 EAL: Detected shared linkage of DPDK 00:51:51.552 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:51:51.552 EAL: Selected IOVA mode 'PA' 00:51:51.552 TELEMETRY: No legacy callbacks, legacy socket not created 00:51:51.552 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:51:51.552 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:51:51.552 Starting DPDK initialization... 00:51:51.552 Starting SPDK post initialization... 00:51:51.552 SPDK NVMe probe 00:51:51.552 Attaching to 0000:00:10.0 00:51:51.552 Attaching to 0000:00:11.0 00:51:51.552 Attached to 0000:00:10.0 00:51:51.552 Attached to 0000:00:11.0 00:51:51.552 Cleaning up... 00:51:51.552 00:51:51.552 real 0m0.182s 00:51:51.552 user 0m0.050s 00:51:51.552 sys 0m0.032s 00:51:51.552 05:50:46 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:51:51.552 ************************************ 00:51:51.552 END TEST env_dpdk_post_init 00:51:51.552 ************************************ 00:51:51.552 05:50:46 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:51:51.552 05:50:46 env -- env/env.sh@26 -- # uname 00:51:51.552 05:50:46 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:51:51.552 05:50:46 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:51:51.552 05:50:46 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:51:51.552 05:50:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:51:51.552 05:50:46 env -- common/autotest_common.sh@10 -- # set +x 00:51:51.552 ************************************ 00:51:51.552 START TEST env_mem_callbacks 00:51:51.552 ************************************ 00:51:51.552 05:50:46 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:51:51.552 EAL: Detected CPU lcores: 10 00:51:51.552 EAL: Detected NUMA nodes: 1 00:51:51.552 EAL: Detected shared linkage of DPDK 00:51:51.811 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:51:51.811 EAL: Selected IOVA mode 'PA' 00:51:51.811 TELEMETRY: No legacy callbacks, legacy socket not created 00:51:51.811 00:51:51.812 00:51:51.812 CUnit - A unit testing framework for C - Version 2.1-3 00:51:51.812 http://cunit.sourceforge.net/ 00:51:51.812 00:51:51.812 00:51:51.812 Suite: memory 00:51:51.812 Test: test ... 
00:51:51.812 register 0x200000200000 2097152 00:51:51.812 malloc 3145728 00:51:51.812 register 0x200000400000 4194304 00:51:51.812 buf 0x200000500000 len 3145728 PASSED 00:51:51.812 malloc 64 00:51:51.812 buf 0x2000004fff40 len 64 PASSED 00:51:51.812 malloc 4194304 00:51:51.812 register 0x200000800000 6291456 00:51:51.812 buf 0x200000a00000 len 4194304 PASSED 00:51:51.812 free 0x200000500000 3145728 00:51:51.812 free 0x2000004fff40 64 00:51:51.812 unregister 0x200000400000 4194304 PASSED 00:51:51.812 free 0x200000a00000 4194304 00:51:51.812 unregister 0x200000800000 6291456 PASSED 00:51:51.812 malloc 8388608 00:51:51.812 register 0x200000400000 10485760 00:51:51.812 buf 0x200000600000 len 8388608 PASSED 00:51:51.812 free 0x200000600000 8388608 00:51:51.812 unregister 0x200000400000 10485760 PASSED 00:51:51.812 passed 00:51:51.812 00:51:51.812 Run Summary: Type Total Ran Passed Failed Inactive 00:51:51.812 suites 1 1 n/a 0 0 00:51:51.812 tests 1 1 1 0 0 00:51:51.812 asserts 15 15 15 0 n/a 00:51:51.812 00:51:51.812 Elapsed time = 0.009 seconds 00:51:51.812 00:51:51.812 real 0m0.144s 00:51:51.812 user 0m0.022s 00:51:51.812 sys 0m0.021s 00:51:51.812 05:50:46 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:51:51.812 05:50:46 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:51:51.812 ************************************ 00:51:51.812 END TEST env_mem_callbacks 00:51:51.812 ************************************ 00:51:51.812 00:51:51.812 real 0m1.965s 00:51:51.812 user 0m0.968s 00:51:51.812 sys 0m0.650s 00:51:51.812 05:50:46 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:51:51.812 ************************************ 00:51:51.812 05:50:46 env -- common/autotest_common.sh@10 -- # set +x 00:51:51.812 END TEST env 00:51:51.812 ************************************ 00:51:51.812 05:50:46 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:51:51.812 05:50:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:51:51.812 05:50:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:51:51.812 05:50:46 -- common/autotest_common.sh@10 -- # set +x 00:51:51.812 ************************************ 00:51:51.812 START TEST rpc 00:51:51.812 ************************************ 00:51:51.812 05:50:46 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:51:52.071 * Looking for test storage... 
00:51:52.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:51:52.071 05:50:46 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:51:52.071 05:50:46 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:51:52.071 05:50:46 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:51:52.071 05:50:46 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:51:52.071 05:50:46 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:51:52.071 05:50:46 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:51:52.071 05:50:46 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:51:52.071 05:50:46 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:51:52.071 05:50:46 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:51:52.071 05:50:46 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:51:52.071 05:50:46 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:51:52.071 05:50:46 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:51:52.071 05:50:46 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:51:52.071 05:50:46 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:51:52.071 05:50:46 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:51:52.071 05:50:46 rpc -- scripts/common.sh@344 -- # case "$op" in 00:51:52.071 05:50:46 rpc -- scripts/common.sh@345 -- # : 1 00:51:52.071 05:50:46 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:51:52.071 05:50:46 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:51:52.071 05:50:46 rpc -- scripts/common.sh@365 -- # decimal 1 00:51:52.071 05:50:46 rpc -- scripts/common.sh@353 -- # local d=1 00:51:52.071 05:50:46 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:51:52.071 05:50:46 rpc -- scripts/common.sh@355 -- # echo 1 00:51:52.071 05:50:46 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:51:52.071 05:50:46 rpc -- scripts/common.sh@366 -- # decimal 2 00:51:52.071 05:50:46 rpc -- scripts/common.sh@353 -- # local d=2 00:51:52.071 05:50:46 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:51:52.071 05:50:46 rpc -- scripts/common.sh@355 -- # echo 2 00:51:52.071 05:50:46 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:51:52.071 05:50:46 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:51:52.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:51:52.071 05:50:46 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:51:52.071 05:50:46 rpc -- scripts/common.sh@368 -- # return 0 00:51:52.071 05:50:46 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:51:52.071 05:50:46 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:51:52.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:52.071 --rc genhtml_branch_coverage=1 00:51:52.071 --rc genhtml_function_coverage=1 00:51:52.071 --rc genhtml_legend=1 00:51:52.071 --rc geninfo_all_blocks=1 00:51:52.071 --rc geninfo_unexecuted_blocks=1 00:51:52.071 00:51:52.071 ' 00:51:52.071 05:50:46 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:51:52.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:52.071 --rc genhtml_branch_coverage=1 00:51:52.071 --rc genhtml_function_coverage=1 00:51:52.071 --rc genhtml_legend=1 00:51:52.071 --rc geninfo_all_blocks=1 00:51:52.071 --rc geninfo_unexecuted_blocks=1 00:51:52.071 00:51:52.071 ' 00:51:52.072 05:50:46 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:51:52.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:52.072 --rc genhtml_branch_coverage=1 00:51:52.072 --rc genhtml_function_coverage=1 00:51:52.072 --rc genhtml_legend=1 00:51:52.072 --rc geninfo_all_blocks=1 00:51:52.072 --rc geninfo_unexecuted_blocks=1 00:51:52.072 00:51:52.072 ' 00:51:52.072 05:50:46 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:51:52.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:52.072 --rc genhtml_branch_coverage=1 00:51:52.072 --rc genhtml_function_coverage=1 00:51:52.072 --rc genhtml_legend=1 00:51:52.072 --rc geninfo_all_blocks=1 00:51:52.072 --rc geninfo_unexecuted_blocks=1 00:51:52.072 00:51:52.072 ' 00:51:52.072 05:50:46 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58395 00:51:52.072 05:50:46 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:51:52.072 05:50:46 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58395 00:51:52.072 05:50:46 rpc -- common/autotest_common.sh@835 -- # '[' -z 58395 ']' 00:51:52.072 05:50:46 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:51:52.072 05:50:46 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:51:52.072 05:50:46 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:51:52.072 05:50:46 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:51:52.072 05:50:46 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:51:52.072 05:50:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:51:52.072 [2024-12-09 05:50:46.626223] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:51:52.072 [2024-12-09 05:50:46.626322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58395 ] 00:51:52.331 [2024-12-09 05:50:46.778081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:52.331 [2024-12-09 05:50:46.818233] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
00:51:52.331 [2024-12-09 05:50:46.818294] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58395' to capture a snapshot of events at runtime. 00:51:52.331 [2024-12-09 05:50:46.818307] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:51:52.331 [2024-12-09 05:50:46.818317] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:51:52.331 [2024-12-09 05:50:46.818325] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58395 for offline analysis/debug. 00:51:52.331 [2024-12-09 05:50:46.818773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:51:52.590 05:50:47 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:51:52.590 05:50:47 rpc -- common/autotest_common.sh@868 -- # return 0 00:51:52.590 05:50:47 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:51:52.590 05:50:47 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:51:52.590 05:50:47 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:51:52.590 05:50:47 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:51:52.590 05:50:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:51:52.590 05:50:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:51:52.590 05:50:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:51:52.591 ************************************ 00:51:52.591 START TEST rpc_integrity 00:51:52.591 ************************************ 00:51:52.591 05:50:47 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:51:52.591 05:50:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:51:52.591 05:50:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:52.591 05:50:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:51:52.591 05:50:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:52.591 05:50:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:51:52.591 05:50:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:51:52.591 05:50:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:51:52.591 05:50:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:51:52.591 05:50:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:52.591 05:50:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:51:52.591 05:50:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:52.591 05:50:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:51:52.591 05:50:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:51:52.591 05:50:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:52.591 05:50:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:51:52.591 05:50:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:52.591 05:50:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:51:52.591 { 00:51:52.591 "aliases": [ 00:51:52.591 "8c85fa57-aabc-4df1-b92c-1254f8fcec5b" 00:51:52.591 ], 00:51:52.591 "assigned_rate_limits": { 
00:51:52.591 "r_mbytes_per_sec": 0, 00:51:52.591 "rw_ios_per_sec": 0, 00:51:52.591 "rw_mbytes_per_sec": 0, 00:51:52.591 "w_mbytes_per_sec": 0 00:51:52.591 }, 00:51:52.591 "block_size": 512, 00:51:52.591 "claimed": false, 00:51:52.591 "driver_specific": {}, 00:51:52.591 "memory_domains": [ 00:51:52.591 { 00:51:52.591 "dma_device_id": "system", 00:51:52.591 "dma_device_type": 1 00:51:52.591 }, 00:51:52.591 { 00:51:52.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:51:52.591 "dma_device_type": 2 00:51:52.591 } 00:51:52.591 ], 00:51:52.591 "name": "Malloc0", 00:51:52.591 "num_blocks": 16384, 00:51:52.591 "product_name": "Malloc disk", 00:51:52.591 "supported_io_types": { 00:51:52.591 "abort": true, 00:51:52.591 "compare": false, 00:51:52.591 "compare_and_write": false, 00:51:52.591 "copy": true, 00:51:52.591 "flush": true, 00:51:52.591 "get_zone_info": false, 00:51:52.591 "nvme_admin": false, 00:51:52.591 "nvme_io": false, 00:51:52.591 "nvme_io_md": false, 00:51:52.591 "nvme_iov_md": false, 00:51:52.591 "read": true, 00:51:52.591 "reset": true, 00:51:52.591 "seek_data": false, 00:51:52.591 "seek_hole": false, 00:51:52.591 "unmap": true, 00:51:52.591 "write": true, 00:51:52.591 "write_zeroes": true, 00:51:52.591 "zcopy": true, 00:51:52.591 "zone_append": false, 00:51:52.591 "zone_management": false 00:51:52.591 }, 00:51:52.591 "uuid": "8c85fa57-aabc-4df1-b92c-1254f8fcec5b", 00:51:52.591 "zoned": false 00:51:52.591 } 00:51:52.591 ]' 00:51:52.591 05:50:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:51:52.851 05:50:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:51:52.851 05:50:47 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:51:52.851 05:50:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:52.851 05:50:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:51:52.851 [2024-12-09 05:50:47.185448] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:51:52.851 [2024-12-09 05:50:47.185499] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:51:52.851 [2024-12-09 05:50:47.185514] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d363e0 00:51:52.851 [2024-12-09 05:50:47.185521] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:51:52.851 [2024-12-09 05:50:47.187068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:51:52.851 [2024-12-09 05:50:47.187098] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:51:52.851 Passthru0 00:51:52.851 05:50:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:52.851 05:50:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:51:52.851 05:50:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:52.851 05:50:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:51:52.851 05:50:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:52.851 05:50:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:51:52.851 { 00:51:52.851 "aliases": [ 00:51:52.851 "8c85fa57-aabc-4df1-b92c-1254f8fcec5b" 00:51:52.851 ], 00:51:52.851 "assigned_rate_limits": { 00:51:52.851 "r_mbytes_per_sec": 0, 00:51:52.851 "rw_ios_per_sec": 0, 00:51:52.851 "rw_mbytes_per_sec": 0, 00:51:52.851 "w_mbytes_per_sec": 0 00:51:52.851 }, 00:51:52.851 "block_size": 512, 00:51:52.851 "claim_type": "exclusive_write", 00:51:52.851 
"claimed": true, 00:51:52.851 "driver_specific": {}, 00:51:52.852 "memory_domains": [ 00:51:52.852 { 00:51:52.852 "dma_device_id": "system", 00:51:52.852 "dma_device_type": 1 00:51:52.852 }, 00:51:52.852 { 00:51:52.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:51:52.852 "dma_device_type": 2 00:51:52.852 } 00:51:52.852 ], 00:51:52.852 "name": "Malloc0", 00:51:52.852 "num_blocks": 16384, 00:51:52.852 "product_name": "Malloc disk", 00:51:52.852 "supported_io_types": { 00:51:52.852 "abort": true, 00:51:52.852 "compare": false, 00:51:52.852 "compare_and_write": false, 00:51:52.852 "copy": true, 00:51:52.852 "flush": true, 00:51:52.852 "get_zone_info": false, 00:51:52.852 "nvme_admin": false, 00:51:52.852 "nvme_io": false, 00:51:52.852 "nvme_io_md": false, 00:51:52.852 "nvme_iov_md": false, 00:51:52.852 "read": true, 00:51:52.852 "reset": true, 00:51:52.852 "seek_data": false, 00:51:52.852 "seek_hole": false, 00:51:52.852 "unmap": true, 00:51:52.852 "write": true, 00:51:52.852 "write_zeroes": true, 00:51:52.852 "zcopy": true, 00:51:52.852 "zone_append": false, 00:51:52.852 "zone_management": false 00:51:52.852 }, 00:51:52.852 "uuid": "8c85fa57-aabc-4df1-b92c-1254f8fcec5b", 00:51:52.852 "zoned": false 00:51:52.852 }, 00:51:52.852 { 00:51:52.852 "aliases": [ 00:51:52.852 "9dc7b1b1-cd62-5ce5-8950-748470e9c766" 00:51:52.852 ], 00:51:52.852 "assigned_rate_limits": { 00:51:52.852 "r_mbytes_per_sec": 0, 00:51:52.852 "rw_ios_per_sec": 0, 00:51:52.852 "rw_mbytes_per_sec": 0, 00:51:52.852 "w_mbytes_per_sec": 0 00:51:52.852 }, 00:51:52.852 "block_size": 512, 00:51:52.852 "claimed": false, 00:51:52.852 "driver_specific": { 00:51:52.852 "passthru": { 00:51:52.852 "base_bdev_name": "Malloc0", 00:51:52.852 "name": "Passthru0" 00:51:52.852 } 00:51:52.852 }, 00:51:52.852 "memory_domains": [ 00:51:52.852 { 00:51:52.852 "dma_device_id": "system", 00:51:52.852 "dma_device_type": 1 00:51:52.852 }, 00:51:52.852 { 00:51:52.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:51:52.852 "dma_device_type": 2 00:51:52.852 } 00:51:52.852 ], 00:51:52.852 "name": "Passthru0", 00:51:52.852 "num_blocks": 16384, 00:51:52.852 "product_name": "passthru", 00:51:52.852 "supported_io_types": { 00:51:52.852 "abort": true, 00:51:52.852 "compare": false, 00:51:52.852 "compare_and_write": false, 00:51:52.852 "copy": true, 00:51:52.852 "flush": true, 00:51:52.852 "get_zone_info": false, 00:51:52.852 "nvme_admin": false, 00:51:52.852 "nvme_io": false, 00:51:52.852 "nvme_io_md": false, 00:51:52.852 "nvme_iov_md": false, 00:51:52.852 "read": true, 00:51:52.852 "reset": true, 00:51:52.852 "seek_data": false, 00:51:52.852 "seek_hole": false, 00:51:52.852 "unmap": true, 00:51:52.852 "write": true, 00:51:52.852 "write_zeroes": true, 00:51:52.852 "zcopy": true, 00:51:52.852 "zone_append": false, 00:51:52.852 "zone_management": false 00:51:52.852 }, 00:51:52.852 "uuid": "9dc7b1b1-cd62-5ce5-8950-748470e9c766", 00:51:52.852 "zoned": false 00:51:52.852 } 00:51:52.852 ]' 00:51:52.852 05:50:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:51:52.852 05:50:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:51:52.852 05:50:47 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:51:52.852 05:50:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:52.852 05:50:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:51:52.852 05:50:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:52.852 05:50:47 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd 
bdev_malloc_delete Malloc0 00:51:52.852 05:50:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:52.852 05:50:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:51:52.852 05:50:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:52.852 05:50:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:51:52.852 05:50:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:52.852 05:50:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:51:52.852 05:50:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:52.852 05:50:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:51:52.852 05:50:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:51:52.852 05:50:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:51:52.852 00:51:52.852 real 0m0.324s 00:51:52.852 user 0m0.219s 00:51:52.852 sys 0m0.030s 00:51:52.852 05:50:47 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:51:52.852 05:50:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:51:52.852 ************************************ 00:51:52.852 END TEST rpc_integrity 00:51:52.852 ************************************ 00:51:52.852 05:50:47 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:51:52.852 05:50:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:51:52.852 05:50:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:51:52.852 05:50:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:51:52.852 ************************************ 00:51:52.852 START TEST rpc_plugins 00:51:52.852 ************************************ 00:51:52.852 05:50:47 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:51:52.852 05:50:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:51:52.852 05:50:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:52.852 05:50:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:51:52.852 05:50:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:52.852 05:50:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:51:52.852 05:50:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:51:52.852 05:50:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:52.852 05:50:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:51:53.111 05:50:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.111 05:50:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:51:53.111 { 00:51:53.111 "aliases": [ 00:51:53.111 "7cd316af-6c63-410c-af8b-a3b16633acaa" 00:51:53.111 ], 00:51:53.111 "assigned_rate_limits": { 00:51:53.111 "r_mbytes_per_sec": 0, 00:51:53.111 "rw_ios_per_sec": 0, 00:51:53.111 "rw_mbytes_per_sec": 0, 00:51:53.111 "w_mbytes_per_sec": 0 00:51:53.111 }, 00:51:53.111 "block_size": 4096, 00:51:53.111 "claimed": false, 00:51:53.111 "driver_specific": {}, 00:51:53.111 "memory_domains": [ 00:51:53.111 { 00:51:53.111 "dma_device_id": "system", 00:51:53.111 "dma_device_type": 1 00:51:53.111 }, 00:51:53.111 { 00:51:53.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:51:53.111 "dma_device_type": 2 00:51:53.111 } 00:51:53.111 ], 00:51:53.111 "name": "Malloc1", 00:51:53.111 "num_blocks": 256, 00:51:53.111 "product_name": "Malloc disk", 00:51:53.111 "supported_io_types": { 00:51:53.111 "abort": true, 00:51:53.111 "compare": false, 
00:51:53.111 "compare_and_write": false, 00:51:53.111 "copy": true, 00:51:53.111 "flush": true, 00:51:53.111 "get_zone_info": false, 00:51:53.111 "nvme_admin": false, 00:51:53.111 "nvme_io": false, 00:51:53.111 "nvme_io_md": false, 00:51:53.111 "nvme_iov_md": false, 00:51:53.111 "read": true, 00:51:53.111 "reset": true, 00:51:53.111 "seek_data": false, 00:51:53.111 "seek_hole": false, 00:51:53.111 "unmap": true, 00:51:53.111 "write": true, 00:51:53.111 "write_zeroes": true, 00:51:53.111 "zcopy": true, 00:51:53.111 "zone_append": false, 00:51:53.111 "zone_management": false 00:51:53.111 }, 00:51:53.111 "uuid": "7cd316af-6c63-410c-af8b-a3b16633acaa", 00:51:53.111 "zoned": false 00:51:53.111 } 00:51:53.111 ]' 00:51:53.111 05:50:47 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:51:53.111 05:50:47 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:51:53.112 05:50:47 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:51:53.112 05:50:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.112 05:50:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:51:53.112 05:50:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.112 05:50:47 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:51:53.112 05:50:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.112 05:50:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:51:53.112 05:50:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.112 05:50:47 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:51:53.112 05:50:47 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:51:53.112 05:50:47 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:51:53.112 00:51:53.112 real 0m0.157s 00:51:53.112 user 0m0.101s 00:51:53.112 sys 0m0.020s 00:51:53.112 05:50:47 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:51:53.112 ************************************ 00:51:53.112 END TEST rpc_plugins 00:51:53.112 05:50:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:51:53.112 ************************************ 00:51:53.112 05:50:47 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:51:53.112 05:50:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:51:53.112 05:50:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:51:53.112 05:50:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:51:53.112 ************************************ 00:51:53.112 START TEST rpc_trace_cmd_test 00:51:53.112 ************************************ 00:51:53.112 05:50:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:51:53.112 05:50:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:51:53.112 05:50:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:51:53.112 05:50:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.112 05:50:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:51:53.112 05:50:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.112 05:50:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:51:53.112 "bdev": { 00:51:53.112 "mask": "0x8", 00:51:53.112 "tpoint_mask": "0xffffffffffffffff" 00:51:53.112 }, 00:51:53.112 "bdev_nvme": { 00:51:53.112 "mask": "0x4000", 00:51:53.112 "tpoint_mask": "0x0" 00:51:53.112 }, 00:51:53.112 "bdev_raid": { 
00:51:53.112 "mask": "0x20000", 00:51:53.112 "tpoint_mask": "0x0" 00:51:53.112 }, 00:51:53.112 "blob": { 00:51:53.112 "mask": "0x10000", 00:51:53.112 "tpoint_mask": "0x0" 00:51:53.112 }, 00:51:53.112 "blobfs": { 00:51:53.112 "mask": "0x80", 00:51:53.112 "tpoint_mask": "0x0" 00:51:53.112 }, 00:51:53.112 "dsa": { 00:51:53.112 "mask": "0x200", 00:51:53.112 "tpoint_mask": "0x0" 00:51:53.112 }, 00:51:53.112 "ftl": { 00:51:53.112 "mask": "0x40", 00:51:53.112 "tpoint_mask": "0x0" 00:51:53.112 }, 00:51:53.112 "iaa": { 00:51:53.112 "mask": "0x1000", 00:51:53.112 "tpoint_mask": "0x0" 00:51:53.112 }, 00:51:53.112 "iscsi_conn": { 00:51:53.112 "mask": "0x2", 00:51:53.112 "tpoint_mask": "0x0" 00:51:53.112 }, 00:51:53.112 "nvme_pcie": { 00:51:53.112 "mask": "0x800", 00:51:53.112 "tpoint_mask": "0x0" 00:51:53.112 }, 00:51:53.112 "nvme_tcp": { 00:51:53.112 "mask": "0x2000", 00:51:53.112 "tpoint_mask": "0x0" 00:51:53.112 }, 00:51:53.112 "nvmf_rdma": { 00:51:53.112 "mask": "0x10", 00:51:53.112 "tpoint_mask": "0x0" 00:51:53.112 }, 00:51:53.112 "nvmf_tcp": { 00:51:53.112 "mask": "0x20", 00:51:53.112 "tpoint_mask": "0x0" 00:51:53.112 }, 00:51:53.112 "scheduler": { 00:51:53.112 "mask": "0x40000", 00:51:53.112 "tpoint_mask": "0x0" 00:51:53.112 }, 00:51:53.112 "scsi": { 00:51:53.112 "mask": "0x4", 00:51:53.112 "tpoint_mask": "0x0" 00:51:53.112 }, 00:51:53.112 "sock": { 00:51:53.112 "mask": "0x8000", 00:51:53.112 "tpoint_mask": "0x0" 00:51:53.112 }, 00:51:53.112 "thread": { 00:51:53.112 "mask": "0x400", 00:51:53.112 "tpoint_mask": "0x0" 00:51:53.112 }, 00:51:53.112 "tpoint_group_mask": "0x8", 00:51:53.112 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58395" 00:51:53.112 }' 00:51:53.112 05:50:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:51:53.112 05:50:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:51:53.112 05:50:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:51:53.371 05:50:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:51:53.371 05:50:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:51:53.371 05:50:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:51:53.371 05:50:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:51:53.371 05:50:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:51:53.371 05:50:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:51:53.371 05:50:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:51:53.371 00:51:53.371 real 0m0.273s 00:51:53.371 user 0m0.237s 00:51:53.371 sys 0m0.026s 00:51:53.371 05:50:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:51:53.371 ************************************ 00:51:53.371 END TEST rpc_trace_cmd_test 00:51:53.371 05:50:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:51:53.371 ************************************ 00:51:53.371 05:50:47 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:51:53.371 05:50:47 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:51:53.371 05:50:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:51:53.371 05:50:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:51:53.371 05:50:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:51:53.371 ************************************ 00:51:53.371 START TEST go_rpc 00:51:53.371 ************************************ 00:51:53.371 05:50:47 rpc.go_rpc -- common/autotest_common.sh@1129 
-- # go_rpc 00:51:53.371 05:50:47 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:51:53.631 05:50:47 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:51:53.631 05:50:47 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:51:53.631 05:50:48 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:51:53.631 05:50:48 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:51:53.631 05:50:48 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.631 05:50:48 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:51:53.631 05:50:48 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.631 05:50:48 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:51:53.631 05:50:48 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:51:53.631 05:50:48 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["427cf107-7a35-4c21-863d-708210df0df0"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"427cf107-7a35-4c21-863d-708210df0df0","zoned":false}]' 00:51:53.631 05:50:48 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:51:53.631 05:50:48 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:51:53.631 05:50:48 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:51:53.631 05:50:48 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.631 05:50:48 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:51:53.631 05:50:48 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.631 05:50:48 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:51:53.631 05:50:48 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:51:53.631 05:50:48 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:51:53.631 05:50:48 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:51:53.631 00:51:53.631 real 0m0.223s 00:51:53.631 user 0m0.157s 00:51:53.631 sys 0m0.031s 00:51:53.631 05:50:48 rpc.go_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:51:53.631 05:50:48 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:51:53.631 ************************************ 00:51:53.631 END TEST go_rpc 00:51:53.631 ************************************ 00:51:53.631 05:50:48 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:51:53.631 05:50:48 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:51:53.631 05:50:48 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:51:53.631 05:50:48 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:51:53.631 05:50:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:51:53.631 ************************************ 00:51:53.631 START TEST rpc_daemon_integrity 00:51:53.631 ************************************ 00:51:53.631 05:50:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:51:53.891 
05:50:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:51:53.891 05:50:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.891 05:50:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:51:53.891 05:50:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.891 05:50:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:51:53.891 05:50:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:51:53.891 05:50:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:51:53.891 05:50:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:51:53.891 05:50:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.891 05:50:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:51:53.891 05:50:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.891 05:50:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:51:53.891 05:50:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:51:53.891 05:50:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.891 05:50:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:51:53.891 05:50:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.891 05:50:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:51:53.891 { 00:51:53.891 "aliases": [ 00:51:53.891 "f03a6ce4-4337-48ee-bc98-fd61459b3a91" 00:51:53.891 ], 00:51:53.891 "assigned_rate_limits": { 00:51:53.891 "r_mbytes_per_sec": 0, 00:51:53.891 "rw_ios_per_sec": 0, 00:51:53.891 "rw_mbytes_per_sec": 0, 00:51:53.891 "w_mbytes_per_sec": 0 00:51:53.891 }, 00:51:53.891 "block_size": 512, 00:51:53.891 "claimed": false, 00:51:53.891 "driver_specific": {}, 00:51:53.891 "memory_domains": [ 00:51:53.891 { 00:51:53.891 "dma_device_id": "system", 00:51:53.891 "dma_device_type": 1 00:51:53.891 }, 00:51:53.891 { 00:51:53.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:51:53.891 "dma_device_type": 2 00:51:53.891 } 00:51:53.891 ], 00:51:53.891 "name": "Malloc3", 00:51:53.891 "num_blocks": 16384, 00:51:53.891 "product_name": "Malloc disk", 00:51:53.891 "supported_io_types": { 00:51:53.891 "abort": true, 00:51:53.891 "compare": false, 00:51:53.891 "compare_and_write": false, 00:51:53.891 "copy": true, 00:51:53.891 "flush": true, 00:51:53.891 "get_zone_info": false, 00:51:53.891 "nvme_admin": false, 00:51:53.891 "nvme_io": false, 00:51:53.891 "nvme_io_md": false, 00:51:53.891 "nvme_iov_md": false, 00:51:53.891 "read": true, 00:51:53.891 "reset": true, 00:51:53.891 "seek_data": false, 00:51:53.891 "seek_hole": false, 00:51:53.891 "unmap": true, 00:51:53.891 "write": true, 00:51:53.891 "write_zeroes": true, 00:51:53.891 "zcopy": true, 00:51:53.891 "zone_append": false, 00:51:53.891 "zone_management": false 00:51:53.891 }, 00:51:53.891 "uuid": "f03a6ce4-4337-48ee-bc98-fd61459b3a91", 00:51:53.891 "zoned": false 00:51:53.891 } 00:51:53.891 ]' 00:51:53.891 05:50:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:51:53.891 05:50:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:51:53.891 05:50:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:51:53.891 05:50:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.891 05:50:48 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:51:53.891 [2024-12-09 05:50:48.365831] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:51:53.891 [2024-12-09 05:50:48.365883] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:51:53.891 [2024-12-09 05:50:48.365900] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d23be0 00:51:53.891 [2024-12-09 05:50:48.365909] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:51:53.891 [2024-12-09 05:50:48.367204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:51:53.891 [2024-12-09 05:50:48.367233] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:51:53.891 Passthru0 00:51:53.891 05:50:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.891 05:50:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:51:53.891 05:50:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.891 05:50:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:51:53.891 05:50:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.891 05:50:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:51:53.891 { 00:51:53.891 "aliases": [ 00:51:53.891 "f03a6ce4-4337-48ee-bc98-fd61459b3a91" 00:51:53.891 ], 00:51:53.891 "assigned_rate_limits": { 00:51:53.891 "r_mbytes_per_sec": 0, 00:51:53.891 "rw_ios_per_sec": 0, 00:51:53.891 "rw_mbytes_per_sec": 0, 00:51:53.891 "w_mbytes_per_sec": 0 00:51:53.891 }, 00:51:53.891 "block_size": 512, 00:51:53.891 "claim_type": "exclusive_write", 00:51:53.891 "claimed": true, 00:51:53.891 "driver_specific": {}, 00:51:53.891 "memory_domains": [ 00:51:53.891 { 00:51:53.891 "dma_device_id": "system", 00:51:53.891 "dma_device_type": 1 00:51:53.891 }, 00:51:53.891 { 00:51:53.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:51:53.891 "dma_device_type": 2 00:51:53.891 } 00:51:53.891 ], 00:51:53.891 "name": "Malloc3", 00:51:53.891 "num_blocks": 16384, 00:51:53.891 "product_name": "Malloc disk", 00:51:53.891 "supported_io_types": { 00:51:53.891 "abort": true, 00:51:53.891 "compare": false, 00:51:53.891 "compare_and_write": false, 00:51:53.891 "copy": true, 00:51:53.891 "flush": true, 00:51:53.891 "get_zone_info": false, 00:51:53.891 "nvme_admin": false, 00:51:53.891 "nvme_io": false, 00:51:53.891 "nvme_io_md": false, 00:51:53.891 "nvme_iov_md": false, 00:51:53.891 "read": true, 00:51:53.891 "reset": true, 00:51:53.891 "seek_data": false, 00:51:53.891 "seek_hole": false, 00:51:53.891 "unmap": true, 00:51:53.891 "write": true, 00:51:53.891 "write_zeroes": true, 00:51:53.891 "zcopy": true, 00:51:53.891 "zone_append": false, 00:51:53.891 "zone_management": false 00:51:53.891 }, 00:51:53.891 "uuid": "f03a6ce4-4337-48ee-bc98-fd61459b3a91", 00:51:53.891 "zoned": false 00:51:53.891 }, 00:51:53.891 { 00:51:53.891 "aliases": [ 00:51:53.891 "49500f33-ed68-5ee0-9dae-1333066173b4" 00:51:53.891 ], 00:51:53.891 "assigned_rate_limits": { 00:51:53.891 "r_mbytes_per_sec": 0, 00:51:53.891 "rw_ios_per_sec": 0, 00:51:53.891 "rw_mbytes_per_sec": 0, 00:51:53.891 "w_mbytes_per_sec": 0 00:51:53.891 }, 00:51:53.891 "block_size": 512, 00:51:53.891 "claimed": false, 00:51:53.891 "driver_specific": { 00:51:53.891 "passthru": { 00:51:53.891 "base_bdev_name": "Malloc3", 00:51:53.891 "name": "Passthru0" 00:51:53.891 } 00:51:53.891 }, 00:51:53.891 
"memory_domains": [ 00:51:53.891 { 00:51:53.891 "dma_device_id": "system", 00:51:53.891 "dma_device_type": 1 00:51:53.891 }, 00:51:53.891 { 00:51:53.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:51:53.891 "dma_device_type": 2 00:51:53.891 } 00:51:53.891 ], 00:51:53.891 "name": "Passthru0", 00:51:53.891 "num_blocks": 16384, 00:51:53.891 "product_name": "passthru", 00:51:53.891 "supported_io_types": { 00:51:53.891 "abort": true, 00:51:53.891 "compare": false, 00:51:53.891 "compare_and_write": false, 00:51:53.891 "copy": true, 00:51:53.891 "flush": true, 00:51:53.891 "get_zone_info": false, 00:51:53.891 "nvme_admin": false, 00:51:53.891 "nvme_io": false, 00:51:53.891 "nvme_io_md": false, 00:51:53.891 "nvme_iov_md": false, 00:51:53.891 "read": true, 00:51:53.891 "reset": true, 00:51:53.891 "seek_data": false, 00:51:53.891 "seek_hole": false, 00:51:53.891 "unmap": true, 00:51:53.891 "write": true, 00:51:53.891 "write_zeroes": true, 00:51:53.891 "zcopy": true, 00:51:53.891 "zone_append": false, 00:51:53.891 "zone_management": false 00:51:53.891 }, 00:51:53.891 "uuid": "49500f33-ed68-5ee0-9dae-1333066173b4", 00:51:53.891 "zoned": false 00:51:53.891 } 00:51:53.891 ]' 00:51:53.891 05:50:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:51:53.891 05:50:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:51:53.891 05:50:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:51:53.892 05:50:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.892 05:50:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:51:53.892 05:50:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.892 05:50:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:51:53.892 05:50:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.892 05:50:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:51:53.892 05:50:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:53.892 05:50:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:51:53.892 05:50:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:51:53.892 05:50:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:51:54.151 05:50:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:51:54.151 05:50:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:51:54.151 05:50:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:51:54.151 05:50:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:51:54.151 00:51:54.151 real 0m0.313s 00:51:54.151 user 0m0.215s 00:51:54.151 sys 0m0.032s 00:51:54.151 05:50:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:51:54.151 05:50:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:51:54.151 ************************************ 00:51:54.151 END TEST rpc_daemon_integrity 00:51:54.151 ************************************ 00:51:54.151 05:50:48 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:51:54.151 05:50:48 rpc -- rpc/rpc.sh@84 -- # killprocess 58395 00:51:54.151 05:50:48 rpc -- common/autotest_common.sh@954 -- # '[' -z 58395 ']' 00:51:54.151 05:50:48 rpc -- common/autotest_common.sh@958 -- # kill -0 58395 00:51:54.151 05:50:48 rpc -- common/autotest_common.sh@959 -- # 
uname 00:51:54.151 05:50:48 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:51:54.151 05:50:48 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58395 00:51:54.151 05:50:48 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:51:54.151 05:50:48 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:51:54.151 killing process with pid 58395 00:51:54.151 05:50:48 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58395' 00:51:54.151 05:50:48 rpc -- common/autotest_common.sh@973 -- # kill 58395 00:51:54.151 05:50:48 rpc -- common/autotest_common.sh@978 -- # wait 58395 00:51:54.410 00:51:54.410 real 0m2.468s 00:51:54.410 user 0m3.368s 00:51:54.410 sys 0m0.640s 00:51:54.410 05:50:48 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:51:54.410 05:50:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:51:54.410 ************************************ 00:51:54.410 END TEST rpc 00:51:54.410 ************************************ 00:51:54.410 05:50:48 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:51:54.410 05:50:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:51:54.410 05:50:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:51:54.410 05:50:48 -- common/autotest_common.sh@10 -- # set +x 00:51:54.410 ************************************ 00:51:54.410 START TEST skip_rpc 00:51:54.410 ************************************ 00:51:54.410 05:50:48 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:51:54.410 * Looking for test storage... 00:51:54.410 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:51:54.410 05:50:48 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:51:54.410 05:50:48 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:51:54.410 05:50:48 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:51:54.670 05:50:49 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:51:54.670 05:50:49 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:51:54.670 05:50:49 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:51:54.670 05:50:49 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:51:54.670 05:50:49 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:51:54.670 05:50:49 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:51:54.670 05:50:49 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:51:54.670 05:50:49 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:51:54.670 05:50:49 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:51:54.670 05:50:49 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:51:54.670 05:50:49 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:51:54.670 05:50:49 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:51:54.670 05:50:49 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:51:54.670 05:50:49 skip_rpc -- scripts/common.sh@345 -- # : 1 00:51:54.670 05:50:49 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:51:54.670 05:50:49 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:51:54.670 05:50:49 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:51:54.670 05:50:49 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:51:54.670 05:50:49 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:51:54.670 05:50:49 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:51:54.670 05:50:49 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:51:54.670 05:50:49 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:51:54.670 05:50:49 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:51:54.670 05:50:49 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:51:54.670 05:50:49 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:51:54.670 05:50:49 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:51:54.670 05:50:49 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:51:54.670 05:50:49 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:51:54.670 05:50:49 skip_rpc -- scripts/common.sh@368 -- # return 0 00:51:54.670 05:50:49 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:51:54.670 05:50:49 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:51:54.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:54.670 --rc genhtml_branch_coverage=1 00:51:54.670 --rc genhtml_function_coverage=1 00:51:54.670 --rc genhtml_legend=1 00:51:54.670 --rc geninfo_all_blocks=1 00:51:54.670 --rc geninfo_unexecuted_blocks=1 00:51:54.670 00:51:54.670 ' 00:51:54.670 05:50:49 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:51:54.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:54.670 --rc genhtml_branch_coverage=1 00:51:54.670 --rc genhtml_function_coverage=1 00:51:54.670 --rc genhtml_legend=1 00:51:54.670 --rc geninfo_all_blocks=1 00:51:54.670 --rc geninfo_unexecuted_blocks=1 00:51:54.670 00:51:54.670 ' 00:51:54.670 05:50:49 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:51:54.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:54.670 --rc genhtml_branch_coverage=1 00:51:54.670 --rc genhtml_function_coverage=1 00:51:54.670 --rc genhtml_legend=1 00:51:54.670 --rc geninfo_all_blocks=1 00:51:54.670 --rc geninfo_unexecuted_blocks=1 00:51:54.670 00:51:54.670 ' 00:51:54.670 05:50:49 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:51:54.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:51:54.670 --rc genhtml_branch_coverage=1 00:51:54.670 --rc genhtml_function_coverage=1 00:51:54.670 --rc genhtml_legend=1 00:51:54.670 --rc geninfo_all_blocks=1 00:51:54.670 --rc geninfo_unexecuted_blocks=1 00:51:54.670 00:51:54.670 ' 00:51:54.670 05:50:49 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:51:54.670 05:50:49 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:51:54.670 05:50:49 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:51:54.670 05:50:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:51:54.670 05:50:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:51:54.670 05:50:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:51:54.670 ************************************ 00:51:54.670 START TEST skip_rpc 00:51:54.670 ************************************ 00:51:54.670 05:50:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:51:54.670 05:50:49 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=58651 00:51:54.670 05:50:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:51:54.670 05:50:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:51:54.670 05:50:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:51:54.670 [2024-12-09 05:50:49.140611] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:51:54.670 [2024-12-09 05:50:49.140733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58651 ] 00:51:54.929 [2024-12-09 05:50:49.276205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:54.929 [2024-12-09 05:50:49.304028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:00.192 05:50:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:52:00.192 05:50:54 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:52:00.192 05:50:54 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:52:00.192 05:50:54 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:52:00.192 05:50:54 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:00.192 05:50:54 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:52:00.192 05:50:54 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:00.192 05:50:54 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:52:00.192 05:50:54 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:00.192 05:50:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:52:00.192 2024/12/09 05:50:54 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:52:00.192 05:50:54 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:52:00.192 05:50:54 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:52:00.192 05:50:54 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:52:00.192 05:50:54 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:52:00.192 05:50:54 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:52:00.192 05:50:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:52:00.192 05:50:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58651 00:52:00.192 05:50:54 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58651 ']' 00:52:00.192 05:50:54 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58651 00:52:00.192 05:50:54 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:52:00.192 05:50:54 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:00.192 05:50:54 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58651 00:52:00.192 05:50:54 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:52:00.192 05:50:54 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:52:00.192 killing process with pid 58651 00:52:00.192 05:50:54 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58651' 00:52:00.192 05:50:54 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58651 00:52:00.192 05:50:54 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58651 00:52:00.192 00:52:00.192 real 0m5.268s 00:52:00.192 user 0m5.018s 00:52:00.192 sys 0m0.164s 00:52:00.192 05:50:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:00.192 05:50:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:52:00.192 ************************************ 00:52:00.192 END TEST skip_rpc 00:52:00.192 ************************************ 00:52:00.192 05:50:54 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:52:00.192 05:50:54 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:52:00.192 05:50:54 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:00.192 05:50:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:52:00.192 ************************************ 00:52:00.192 START TEST skip_rpc_with_json 00:52:00.192 ************************************ 00:52:00.192 05:50:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:52:00.192 05:50:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:52:00.192 05:50:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58743 00:52:00.192 05:50:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:52:00.192 05:50:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:52:00.192 05:50:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58743 00:52:00.192 05:50:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58743 ']' 00:52:00.192 05:50:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:00.192 05:50:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:00.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:00.192 05:50:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:00.192 05:50:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:00.192 05:50:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:52:00.192 [2024-12-09 05:50:54.451153] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:52:00.192 [2024-12-09 05:50:54.451251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58743 ] 00:52:00.192 [2024-12-09 05:50:54.595369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:00.192 [2024-12-09 05:50:54.625451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:01.126 05:50:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:01.126 05:50:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:52:01.126 05:50:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:52:01.126 05:50:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:01.126 05:50:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:52:01.126 [2024-12-09 05:50:55.405457] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:52:01.126 2024/12/09 05:50:55 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:52:01.126 request: 00:52:01.126 { 00:52:01.126 "method": "nvmf_get_transports", 00:52:01.126 "params": { 00:52:01.126 "trtype": "tcp" 00:52:01.126 } 00:52:01.126 } 00:52:01.126 Got JSON-RPC error response 00:52:01.126 GoRPCClient: error on JSON-RPC call 00:52:01.126 05:50:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:52:01.126 05:50:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:52:01.126 05:50:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:01.126 05:50:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:52:01.126 [2024-12-09 05:50:55.417552] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:52:01.126 05:50:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:01.126 05:50:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:52:01.126 05:50:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:01.126 05:50:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:52:01.126 05:50:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:01.126 05:50:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:52:01.126 { 00:52:01.126 "subsystems": [ 00:52:01.126 { 00:52:01.126 "subsystem": "fsdev", 00:52:01.126 "config": [ 00:52:01.126 { 00:52:01.126 "method": "fsdev_set_opts", 00:52:01.126 "params": { 00:52:01.126 "fsdev_io_cache_size": 256, 00:52:01.126 "fsdev_io_pool_size": 65535 00:52:01.126 } 00:52:01.126 } 00:52:01.126 ] 00:52:01.126 }, 00:52:01.126 { 00:52:01.126 "subsystem": "keyring", 00:52:01.126 "config": [] 00:52:01.126 }, 00:52:01.126 { 00:52:01.126 "subsystem": "iobuf", 00:52:01.126 "config": [ 00:52:01.126 { 00:52:01.126 "method": "iobuf_set_options", 00:52:01.126 "params": { 00:52:01.126 "enable_numa": false, 00:52:01.126 "large_bufsize": 135168, 00:52:01.126 "large_pool_count": 1024, 00:52:01.126 "small_bufsize": 8192, 00:52:01.126 "small_pool_count": 8192 00:52:01.126 } 
00:52:01.126 } 00:52:01.126 ] 00:52:01.126 }, 00:52:01.126 { 00:52:01.126 "subsystem": "sock", 00:52:01.126 "config": [ 00:52:01.126 { 00:52:01.126 "method": "sock_set_default_impl", 00:52:01.126 "params": { 00:52:01.126 "impl_name": "posix" 00:52:01.126 } 00:52:01.126 }, 00:52:01.126 { 00:52:01.126 "method": "sock_impl_set_options", 00:52:01.126 "params": { 00:52:01.126 "enable_ktls": false, 00:52:01.126 "enable_placement_id": 0, 00:52:01.126 "enable_quickack": false, 00:52:01.126 "enable_recv_pipe": true, 00:52:01.126 "enable_zerocopy_send_client": false, 00:52:01.126 "enable_zerocopy_send_server": true, 00:52:01.126 "impl_name": "ssl", 00:52:01.126 "recv_buf_size": 4096, 00:52:01.126 "send_buf_size": 4096, 00:52:01.126 "tls_version": 0, 00:52:01.126 "zerocopy_threshold": 0 00:52:01.126 } 00:52:01.126 }, 00:52:01.126 { 00:52:01.126 "method": "sock_impl_set_options", 00:52:01.126 "params": { 00:52:01.126 "enable_ktls": false, 00:52:01.126 "enable_placement_id": 0, 00:52:01.126 "enable_quickack": false, 00:52:01.126 "enable_recv_pipe": true, 00:52:01.126 "enable_zerocopy_send_client": false, 00:52:01.126 "enable_zerocopy_send_server": true, 00:52:01.126 "impl_name": "posix", 00:52:01.126 "recv_buf_size": 2097152, 00:52:01.126 "send_buf_size": 2097152, 00:52:01.126 "tls_version": 0, 00:52:01.126 "zerocopy_threshold": 0 00:52:01.126 } 00:52:01.126 } 00:52:01.126 ] 00:52:01.126 }, 00:52:01.126 { 00:52:01.126 "subsystem": "vmd", 00:52:01.126 "config": [] 00:52:01.126 }, 00:52:01.126 { 00:52:01.126 "subsystem": "accel", 00:52:01.126 "config": [ 00:52:01.126 { 00:52:01.126 "method": "accel_set_options", 00:52:01.126 "params": { 00:52:01.126 "buf_count": 2048, 00:52:01.126 "large_cache_size": 16, 00:52:01.126 "sequence_count": 2048, 00:52:01.126 "small_cache_size": 128, 00:52:01.126 "task_count": 2048 00:52:01.126 } 00:52:01.126 } 00:52:01.126 ] 00:52:01.126 }, 00:52:01.126 { 00:52:01.126 "subsystem": "bdev", 00:52:01.126 "config": [ 00:52:01.126 { 00:52:01.126 "method": "bdev_set_options", 00:52:01.126 "params": { 00:52:01.126 "bdev_auto_examine": true, 00:52:01.126 "bdev_io_cache_size": 256, 00:52:01.126 "bdev_io_pool_size": 65535, 00:52:01.126 "iobuf_large_cache_size": 16, 00:52:01.126 "iobuf_small_cache_size": 128 00:52:01.126 } 00:52:01.126 }, 00:52:01.126 { 00:52:01.126 "method": "bdev_raid_set_options", 00:52:01.126 "params": { 00:52:01.126 "process_max_bandwidth_mb_sec": 0, 00:52:01.126 "process_window_size_kb": 1024 00:52:01.126 } 00:52:01.126 }, 00:52:01.126 { 00:52:01.126 "method": "bdev_iscsi_set_options", 00:52:01.126 "params": { 00:52:01.126 "timeout_sec": 30 00:52:01.126 } 00:52:01.126 }, 00:52:01.126 { 00:52:01.126 "method": "bdev_nvme_set_options", 00:52:01.126 "params": { 00:52:01.126 "action_on_timeout": "none", 00:52:01.126 "allow_accel_sequence": false, 00:52:01.126 "arbitration_burst": 0, 00:52:01.126 "bdev_retry_count": 3, 00:52:01.126 "ctrlr_loss_timeout_sec": 0, 00:52:01.126 "delay_cmd_submit": true, 00:52:01.126 "dhchap_dhgroups": [ 00:52:01.126 "null", 00:52:01.126 "ffdhe2048", 00:52:01.126 "ffdhe3072", 00:52:01.126 "ffdhe4096", 00:52:01.126 "ffdhe6144", 00:52:01.126 "ffdhe8192" 00:52:01.126 ], 00:52:01.126 "dhchap_digests": [ 00:52:01.126 "sha256", 00:52:01.126 "sha384", 00:52:01.126 "sha512" 00:52:01.126 ], 00:52:01.126 "disable_auto_failback": false, 00:52:01.126 "fast_io_fail_timeout_sec": 0, 00:52:01.126 "generate_uuids": false, 00:52:01.126 "high_priority_weight": 0, 00:52:01.126 "io_path_stat": false, 00:52:01.126 "io_queue_requests": 0, 00:52:01.126 
"keep_alive_timeout_ms": 10000, 00:52:01.126 "low_priority_weight": 0, 00:52:01.126 "medium_priority_weight": 0, 00:52:01.126 "nvme_adminq_poll_period_us": 10000, 00:52:01.126 "nvme_error_stat": false, 00:52:01.126 "nvme_ioq_poll_period_us": 0, 00:52:01.126 "rdma_cm_event_timeout_ms": 0, 00:52:01.127 "rdma_max_cq_size": 0, 00:52:01.127 "rdma_srq_size": 0, 00:52:01.127 "reconnect_delay_sec": 0, 00:52:01.127 "timeout_admin_us": 0, 00:52:01.127 "timeout_us": 0, 00:52:01.127 "transport_ack_timeout": 0, 00:52:01.127 "transport_retry_count": 4, 00:52:01.127 "transport_tos": 0 00:52:01.127 } 00:52:01.127 }, 00:52:01.127 { 00:52:01.127 "method": "bdev_nvme_set_hotplug", 00:52:01.127 "params": { 00:52:01.127 "enable": false, 00:52:01.127 "period_us": 100000 00:52:01.127 } 00:52:01.127 }, 00:52:01.127 { 00:52:01.127 "method": "bdev_wait_for_examine" 00:52:01.127 } 00:52:01.127 ] 00:52:01.127 }, 00:52:01.127 { 00:52:01.127 "subsystem": "scsi", 00:52:01.127 "config": null 00:52:01.127 }, 00:52:01.127 { 00:52:01.127 "subsystem": "scheduler", 00:52:01.127 "config": [ 00:52:01.127 { 00:52:01.127 "method": "framework_set_scheduler", 00:52:01.127 "params": { 00:52:01.127 "name": "static" 00:52:01.127 } 00:52:01.127 } 00:52:01.127 ] 00:52:01.127 }, 00:52:01.127 { 00:52:01.127 "subsystem": "vhost_scsi", 00:52:01.127 "config": [] 00:52:01.127 }, 00:52:01.127 { 00:52:01.127 "subsystem": "vhost_blk", 00:52:01.127 "config": [] 00:52:01.127 }, 00:52:01.127 { 00:52:01.127 "subsystem": "ublk", 00:52:01.127 "config": [] 00:52:01.127 }, 00:52:01.127 { 00:52:01.127 "subsystem": "nbd", 00:52:01.127 "config": [] 00:52:01.127 }, 00:52:01.127 { 00:52:01.127 "subsystem": "nvmf", 00:52:01.127 "config": [ 00:52:01.127 { 00:52:01.127 "method": "nvmf_set_config", 00:52:01.127 "params": { 00:52:01.127 "admin_cmd_passthru": { 00:52:01.127 "identify_ctrlr": false 00:52:01.127 }, 00:52:01.127 "dhchap_dhgroups": [ 00:52:01.127 "null", 00:52:01.127 "ffdhe2048", 00:52:01.127 "ffdhe3072", 00:52:01.127 "ffdhe4096", 00:52:01.127 "ffdhe6144", 00:52:01.127 "ffdhe8192" 00:52:01.127 ], 00:52:01.127 "dhchap_digests": [ 00:52:01.127 "sha256", 00:52:01.127 "sha384", 00:52:01.127 "sha512" 00:52:01.127 ], 00:52:01.127 "discovery_filter": "match_any" 00:52:01.127 } 00:52:01.127 }, 00:52:01.127 { 00:52:01.127 "method": "nvmf_set_max_subsystems", 00:52:01.127 "params": { 00:52:01.127 "max_subsystems": 1024 00:52:01.127 } 00:52:01.127 }, 00:52:01.127 { 00:52:01.127 "method": "nvmf_set_crdt", 00:52:01.127 "params": { 00:52:01.127 "crdt1": 0, 00:52:01.127 "crdt2": 0, 00:52:01.127 "crdt3": 0 00:52:01.127 } 00:52:01.127 }, 00:52:01.127 { 00:52:01.127 "method": "nvmf_create_transport", 00:52:01.127 "params": { 00:52:01.127 "abort_timeout_sec": 1, 00:52:01.127 "ack_timeout": 0, 00:52:01.127 "buf_cache_size": 4294967295, 00:52:01.127 "c2h_success": true, 00:52:01.127 "data_wr_pool_size": 0, 00:52:01.127 "dif_insert_or_strip": false, 00:52:01.127 "in_capsule_data_size": 4096, 00:52:01.127 "io_unit_size": 131072, 00:52:01.127 "max_aq_depth": 128, 00:52:01.127 "max_io_qpairs_per_ctrlr": 127, 00:52:01.127 "max_io_size": 131072, 00:52:01.127 "max_queue_depth": 128, 00:52:01.127 "num_shared_buffers": 511, 00:52:01.127 "sock_priority": 0, 00:52:01.127 "trtype": "TCP", 00:52:01.127 "zcopy": false 00:52:01.127 } 00:52:01.127 } 00:52:01.127 ] 00:52:01.127 }, 00:52:01.127 { 00:52:01.127 "subsystem": "iscsi", 00:52:01.127 "config": [ 00:52:01.127 { 00:52:01.127 "method": "iscsi_set_options", 00:52:01.127 "params": { 00:52:01.127 "allow_duplicated_isid": false, 
00:52:01.127 "chap_group": 0, 00:52:01.127 "data_out_pool_size": 2048, 00:52:01.127 "default_time2retain": 20, 00:52:01.127 "default_time2wait": 2, 00:52:01.127 "disable_chap": false, 00:52:01.127 "error_recovery_level": 0, 00:52:01.127 "first_burst_length": 8192, 00:52:01.127 "immediate_data": true, 00:52:01.127 "immediate_data_pool_size": 16384, 00:52:01.127 "max_connections_per_session": 2, 00:52:01.127 "max_large_datain_per_connection": 64, 00:52:01.127 "max_queue_depth": 64, 00:52:01.127 "max_r2t_per_connection": 4, 00:52:01.127 "max_sessions": 128, 00:52:01.127 "mutual_chap": false, 00:52:01.127 "node_base": "iqn.2016-06.io.spdk", 00:52:01.127 "nop_in_interval": 30, 00:52:01.127 "nop_timeout": 60, 00:52:01.127 "pdu_pool_size": 36864, 00:52:01.127 "require_chap": false 00:52:01.127 } 00:52:01.127 } 00:52:01.127 ] 00:52:01.127 } 00:52:01.127 ] 00:52:01.127 } 00:52:01.127 05:50:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:52:01.127 05:50:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58743 00:52:01.127 05:50:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58743 ']' 00:52:01.127 05:50:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58743 00:52:01.127 05:50:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:52:01.127 05:50:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:01.127 05:50:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58743 00:52:01.127 05:50:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:52:01.127 05:50:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:52:01.127 killing process with pid 58743 00:52:01.127 05:50:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58743' 00:52:01.127 05:50:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58743 00:52:01.127 05:50:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58743 00:52:01.386 05:50:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58777 00:52:01.386 05:50:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:52:01.386 05:50:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:52:06.668 05:51:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58777 00:52:06.668 05:51:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58777 ']' 00:52:06.668 05:51:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58777 00:52:06.668 05:51:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:52:06.668 05:51:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:06.668 05:51:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58777 00:52:06.668 05:51:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:52:06.668 05:51:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:52:06.668 05:51:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
58777' 00:52:06.668 killing process with pid 58777 00:52:06.668 05:51:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58777 00:52:06.668 05:51:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58777 00:52:06.668 05:51:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:52:06.668 05:51:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:52:06.668 00:52:06.668 real 0m6.728s 00:52:06.668 user 0m6.655s 00:52:06.668 sys 0m0.445s 00:52:06.668 05:51:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:06.668 ************************************ 00:52:06.668 05:51:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:52:06.668 END TEST skip_rpc_with_json 00:52:06.668 ************************************ 00:52:06.668 05:51:01 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:52:06.668 05:51:01 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:52:06.668 05:51:01 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:06.668 05:51:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:52:06.668 ************************************ 00:52:06.668 START TEST skip_rpc_with_delay 00:52:06.668 ************************************ 00:52:06.668 05:51:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:52:06.668 05:51:01 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:52:06.668 05:51:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:52:06.668 05:51:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:52:06.668 05:51:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:52:06.668 05:51:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:06.668 05:51:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:52:06.668 05:51:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:06.668 05:51:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:52:06.668 05:51:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:06.668 05:51:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:52:06.668 05:51:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:52:06.668 05:51:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:52:06.668 [2024-12-09 05:51:01.235206] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
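[Note] The error just above is the outcome skip_rpc_with_delay expects: combining --no-rpc-server with --wait-for-rpc is rejected at startup, and the NOT wrapper turns that failure into a pass. The earlier skip_rpc_with_json case is the more involved one: it is a save/restore round trip of the subsystem configuration dumped above. A hand-run sketch of that flow, assuming the paths and 0x1 core mask shown in the log (the /tmp file names are illustrative):

    SPDK=/home/vagrant/spdk_repo/spdk
    # start the target with the RPC server enabled and create the TCP transport
    $SPDK/build/bin/spdk_tgt -m 0x1 &
    TGT=$!
    sleep 1   # the test uses waitforlisten; a short sleep suffices by hand
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp
    # dump the live configuration to JSON, then stop the target
    $SPDK/scripts/rpc.py save_config > /tmp/config.json
    kill $TGT; wait $TGT
    # relaunch purely from the saved JSON with no RPC server and confirm the
    # TCP transport is re-created during startup, exactly what the test greps for
    $SPDK/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json &> /tmp/log.txt &
    sleep 5
    grep -q 'TCP Transport Init' /tmp/log.txt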
00:52:06.928 05:51:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:52:06.928 05:51:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:52:06.928 05:51:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:52:06.928 05:51:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:52:06.928 00:52:06.928 real 0m0.095s 00:52:06.928 user 0m0.069s 00:52:06.928 sys 0m0.025s 00:52:06.928 05:51:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:06.928 05:51:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:52:06.928 ************************************ 00:52:06.928 END TEST skip_rpc_with_delay 00:52:06.928 ************************************ 00:52:06.928 05:51:01 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:52:06.928 05:51:01 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:52:06.928 05:51:01 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:52:06.928 05:51:01 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:52:06.928 05:51:01 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:06.928 05:51:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:52:06.928 ************************************ 00:52:06.928 START TEST exit_on_failed_rpc_init 00:52:06.928 ************************************ 00:52:06.928 05:51:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:52:06.928 05:51:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58887 00:52:06.928 05:51:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58887 00:52:06.928 05:51:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:52:06.928 05:51:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58887 ']' 00:52:06.928 05:51:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:06.928 05:51:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:06.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:06.928 05:51:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:06.928 05:51:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:06.928 05:51:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:52:06.928 [2024-12-09 05:51:01.384810] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:52:06.928 [2024-12-09 05:51:01.384901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58887 ] 00:52:07.187 [2024-12-09 05:51:01.532685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:07.187 [2024-12-09 05:51:01.566067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:07.187 05:51:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:07.187 05:51:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:52:07.187 05:51:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:52:07.187 05:51:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:52:07.187 05:51:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:52:07.187 05:51:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:52:07.187 05:51:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:52:07.187 05:51:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:07.187 05:51:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:52:07.187 05:51:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:07.187 05:51:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:52:07.187 05:51:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:07.187 05:51:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:52:07.187 05:51:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:52:07.187 05:51:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:52:07.446 [2024-12-09 05:51:01.796680] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:52:07.446 [2024-12-09 05:51:01.796781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58903 ] 00:52:07.446 [2024-12-09 05:51:01.949173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:07.446 [2024-12-09 05:51:01.987905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:07.446 [2024-12-09 05:51:01.988167] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:52:07.446 [2024-12-09 05:51:01.988278] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:52:07.447 [2024-12-09 05:51:01.988368] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:52:07.706 05:51:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:52:07.706 05:51:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:52:07.706 05:51:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:52:07.706 05:51:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:52:07.706 05:51:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:52:07.706 05:51:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:52:07.706 05:51:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:52:07.706 05:51:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58887 00:52:07.706 05:51:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58887 ']' 00:52:07.706 05:51:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58887 00:52:07.706 05:51:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:52:07.706 05:51:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:07.706 05:51:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58887 00:52:07.706 killing process with pid 58887 00:52:07.706 05:51:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:52:07.706 05:51:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:52:07.706 05:51:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58887' 00:52:07.706 05:51:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58887 00:52:07.706 05:51:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58887 00:52:07.965 ************************************ 00:52:07.965 END TEST exit_on_failed_rpc_init 00:52:07.965 ************************************ 00:52:07.965 00:52:07.965 real 0m1.007s 00:52:07.965 user 0m1.173s 00:52:07.965 sys 0m0.273s 00:52:07.965 05:51:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:07.965 05:51:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:52:07.965 05:51:02 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:52:07.965 00:52:07.965 real 0m13.484s 00:52:07.965 user 0m13.111s 00:52:07.965 sys 0m1.091s 00:52:07.965 05:51:02 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:07.965 ************************************ 00:52:07.965 05:51:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:52:07.965 END TEST skip_rpc 00:52:07.965 ************************************ 00:52:07.965 05:51:02 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:52:07.965 05:51:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:52:07.965 05:51:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:07.965 05:51:02 -- common/autotest_common.sh@10 -- # set +x 00:52:07.965 
************************************ 00:52:07.965 START TEST rpc_client 00:52:07.965 ************************************ 00:52:07.965 05:51:02 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:52:07.965 * Looking for test storage... 00:52:07.965 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:52:07.965 05:51:02 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:52:07.965 05:51:02 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:52:07.965 05:51:02 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:52:08.224 05:51:02 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:52:08.224 05:51:02 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:52:08.224 05:51:02 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:52:08.224 05:51:02 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:52:08.224 05:51:02 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:52:08.224 05:51:02 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:52:08.224 05:51:02 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:52:08.224 05:51:02 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:52:08.224 05:51:02 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:52:08.224 05:51:02 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:52:08.224 05:51:02 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:52:08.224 05:51:02 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:52:08.224 05:51:02 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:52:08.224 05:51:02 rpc_client -- scripts/common.sh@345 -- # : 1 00:52:08.224 05:51:02 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:52:08.224 05:51:02 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:52:08.224 05:51:02 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:52:08.224 05:51:02 rpc_client -- scripts/common.sh@353 -- # local d=1 00:52:08.224 05:51:02 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:52:08.224 05:51:02 rpc_client -- scripts/common.sh@355 -- # echo 1 00:52:08.224 05:51:02 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:52:08.224 05:51:02 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:52:08.224 05:51:02 rpc_client -- scripts/common.sh@353 -- # local d=2 00:52:08.224 05:51:02 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:52:08.224 05:51:02 rpc_client -- scripts/common.sh@355 -- # echo 2 00:52:08.224 05:51:02 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:52:08.224 05:51:02 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:52:08.224 05:51:02 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:52:08.224 05:51:02 rpc_client -- scripts/common.sh@368 -- # return 0 00:52:08.224 05:51:02 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:52:08.224 05:51:02 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:52:08.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:08.224 --rc genhtml_branch_coverage=1 00:52:08.224 --rc genhtml_function_coverage=1 00:52:08.224 --rc genhtml_legend=1 00:52:08.224 --rc geninfo_all_blocks=1 00:52:08.224 --rc geninfo_unexecuted_blocks=1 00:52:08.224 00:52:08.224 ' 00:52:08.224 05:51:02 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:52:08.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:08.224 --rc genhtml_branch_coverage=1 00:52:08.224 --rc genhtml_function_coverage=1 00:52:08.224 --rc genhtml_legend=1 00:52:08.224 --rc geninfo_all_blocks=1 00:52:08.224 --rc geninfo_unexecuted_blocks=1 00:52:08.224 00:52:08.224 ' 00:52:08.224 05:51:02 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:52:08.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:08.224 --rc genhtml_branch_coverage=1 00:52:08.224 --rc genhtml_function_coverage=1 00:52:08.224 --rc genhtml_legend=1 00:52:08.224 --rc geninfo_all_blocks=1 00:52:08.224 --rc geninfo_unexecuted_blocks=1 00:52:08.224 00:52:08.224 ' 00:52:08.224 05:51:02 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:52:08.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:08.224 --rc genhtml_branch_coverage=1 00:52:08.224 --rc genhtml_function_coverage=1 00:52:08.224 --rc genhtml_legend=1 00:52:08.224 --rc geninfo_all_blocks=1 00:52:08.225 --rc geninfo_unexecuted_blocks=1 00:52:08.225 00:52:08.225 ' 00:52:08.225 05:51:02 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:52:08.225 OK 00:52:08.225 05:51:02 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:52:08.225 00:52:08.225 real 0m0.210s 00:52:08.225 user 0m0.128s 00:52:08.225 sys 0m0.089s 00:52:08.225 ************************************ 00:52:08.225 END TEST rpc_client 00:52:08.225 ************************************ 00:52:08.225 05:51:02 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:08.225 05:51:02 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:52:08.225 05:51:02 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:52:08.225 05:51:02 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:52:08.225 05:51:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:08.225 05:51:02 -- common/autotest_common.sh@10 -- # set +x 00:52:08.225 ************************************ 00:52:08.225 START TEST json_config 00:52:08.225 ************************************ 00:52:08.225 05:51:02 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:52:08.225 05:51:02 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:52:08.225 05:51:02 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:52:08.225 05:51:02 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:52:08.483 05:51:02 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:52:08.483 05:51:02 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:52:08.483 05:51:02 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:52:08.483 05:51:02 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:52:08.483 05:51:02 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:52:08.483 05:51:02 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:52:08.483 05:51:02 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:52:08.483 05:51:02 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:52:08.483 05:51:02 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:52:08.483 05:51:02 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:52:08.483 05:51:02 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:52:08.483 05:51:02 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:52:08.483 05:51:02 json_config -- scripts/common.sh@344 -- # case "$op" in 00:52:08.483 05:51:02 json_config -- scripts/common.sh@345 -- # : 1 00:52:08.483 05:51:02 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:52:08.483 05:51:02 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:52:08.483 05:51:02 json_config -- scripts/common.sh@365 -- # decimal 1 00:52:08.483 05:51:02 json_config -- scripts/common.sh@353 -- # local d=1 00:52:08.483 05:51:02 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:52:08.483 05:51:02 json_config -- scripts/common.sh@355 -- # echo 1 00:52:08.483 05:51:02 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:52:08.483 05:51:02 json_config -- scripts/common.sh@366 -- # decimal 2 00:52:08.483 05:51:02 json_config -- scripts/common.sh@353 -- # local d=2 00:52:08.484 05:51:02 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:52:08.484 05:51:02 json_config -- scripts/common.sh@355 -- # echo 2 00:52:08.484 05:51:02 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:52:08.484 05:51:02 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:52:08.484 05:51:02 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:52:08.484 05:51:02 json_config -- scripts/common.sh@368 -- # return 0 00:52:08.484 05:51:02 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:52:08.484 05:51:02 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:52:08.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:08.484 --rc genhtml_branch_coverage=1 00:52:08.484 --rc genhtml_function_coverage=1 00:52:08.484 --rc genhtml_legend=1 00:52:08.484 --rc geninfo_all_blocks=1 00:52:08.484 --rc geninfo_unexecuted_blocks=1 00:52:08.484 00:52:08.484 ' 00:52:08.484 05:51:02 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:52:08.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:08.484 --rc genhtml_branch_coverage=1 00:52:08.484 --rc genhtml_function_coverage=1 00:52:08.484 --rc genhtml_legend=1 00:52:08.484 --rc geninfo_all_blocks=1 00:52:08.484 --rc geninfo_unexecuted_blocks=1 00:52:08.484 00:52:08.484 ' 00:52:08.484 05:51:02 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:52:08.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:08.484 --rc genhtml_branch_coverage=1 00:52:08.484 --rc genhtml_function_coverage=1 00:52:08.484 --rc genhtml_legend=1 00:52:08.484 --rc geninfo_all_blocks=1 00:52:08.484 --rc geninfo_unexecuted_blocks=1 00:52:08.484 00:52:08.484 ' 00:52:08.484 05:51:02 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:52:08.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:08.484 --rc genhtml_branch_coverage=1 00:52:08.484 --rc genhtml_function_coverage=1 00:52:08.484 --rc genhtml_legend=1 00:52:08.484 --rc geninfo_all_blocks=1 00:52:08.484 --rc geninfo_unexecuted_blocks=1 00:52:08.484 00:52:08.484 ' 00:52:08.484 05:51:02 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:52:08.484 05:51:02 json_config -- nvmf/common.sh@7 -- # uname -s 00:52:08.484 05:51:02 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:52:08.484 05:51:02 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:52:08.484 05:51:02 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:52:08.484 05:51:02 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:52:08.484 05:51:02 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:52:08.484 05:51:02 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:52:08.484 05:51:02 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:52:08.484 05:51:02 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:52:08.484 05:51:02 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:52:08.484 05:51:02 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:52:08.484 05:51:02 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:52:08.484 05:51:02 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 00:52:08.484 05:51:02 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:52:08.484 05:51:02 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:52:08.484 05:51:02 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:52:08.484 05:51:02 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:52:08.484 05:51:02 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:52:08.484 05:51:02 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:52:08.484 05:51:02 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:52:08.484 05:51:02 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:52:08.484 05:51:02 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:52:08.484 05:51:02 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:08.484 05:51:02 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:08.484 05:51:02 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:08.484 05:51:02 json_config -- paths/export.sh@5 -- # export PATH 00:52:08.484 05:51:02 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:08.484 05:51:02 json_config -- nvmf/common.sh@51 -- # : 0 00:52:08.484 05:51:02 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:52:08.484 05:51:02 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:52:08.484 05:51:02 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:52:08.484 05:51:02 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:52:08.484 05:51:02 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:52:08.484 05:51:02 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:52:08.484 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:52:08.484 05:51:02 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:52:08.484 05:51:02 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:52:08.484 05:51:02 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:52:08.484 05:51:02 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:52:08.484 05:51:02 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:52:08.484 05:51:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:52:08.484 05:51:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:52:08.484 05:51:02 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:52:08.484 05:51:02 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:52:08.484 05:51:02 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:52:08.484 05:51:02 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:52:08.484 05:51:02 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:52:08.484 05:51:02 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:52:08.484 05:51:02 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:52:08.484 05:51:02 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:52:08.484 05:51:02 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:52:08.484 05:51:02 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:52:08.484 05:51:02 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:52:08.484 05:51:02 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:52:08.484 INFO: JSON configuration test init 00:52:08.484 05:51:02 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:52:08.484 05:51:02 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:52:08.484 05:51:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:52:08.484 05:51:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:52:08.484 05:51:02 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:52:08.484 05:51:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:52:08.484 05:51:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:52:08.484 05:51:02 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:52:08.484 05:51:02 json_config -- json_config/common.sh@9 -- # local app=target 00:52:08.484 05:51:02 json_config -- json_config/common.sh@10 -- # shift 
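[Note] The associative arrays declared above (app_pid, app_socket, app_params, configs_path) are the per-application state json_config/common.sh keeps; the target launch that follows is assembled from them roughly as sketched here (only the 'target' entries are shown). The target is held at --wait-for-rpc so configuration can be driven over JSON-RPC before subsystem initialization:

    declare -A app_socket=([target]=/var/tmp/spdk_tgt.sock)
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A app_pid
    app=target
    # launch spdk_tgt on its own RPC socket, deferring subsystem init
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
        -r ${app_socket[$app]} --wait-for-rpc &
    app_pid[$app]=$!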
00:52:08.484 Waiting for target to run... 00:52:08.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:52:08.484 05:51:02 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:52:08.484 05:51:02 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:52:08.484 05:51:02 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:52:08.484 05:51:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:52:08.484 05:51:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:52:08.484 05:51:02 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59037 00:52:08.484 05:51:02 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:52:08.484 05:51:02 json_config -- json_config/common.sh@25 -- # waitforlisten 59037 /var/tmp/spdk_tgt.sock 00:52:08.484 05:51:02 json_config -- common/autotest_common.sh@835 -- # '[' -z 59037 ']' 00:52:08.484 05:51:02 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:52:08.484 05:51:02 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:08.484 05:51:02 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:52:08.484 05:51:02 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:08.484 05:51:02 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:52:08.484 05:51:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:52:08.484 [2024-12-09 05:51:02.936157] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:52:08.485 [2024-12-09 05:51:02.936252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59037 ] 00:52:08.742 [2024-12-09 05:51:03.258537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:08.742 [2024-12-09 05:51:03.279610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:09.674 00:52:09.674 05:51:03 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:09.674 05:51:03 json_config -- common/autotest_common.sh@868 -- # return 0 00:52:09.674 05:51:03 json_config -- json_config/common.sh@26 -- # echo '' 00:52:09.674 05:51:03 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:52:09.674 05:51:03 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:52:09.674 05:51:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:52:09.674 05:51:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:52:09.674 05:51:03 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:52:09.674 05:51:03 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:52:09.674 05:51:03 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:52:09.674 05:51:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:52:09.674 05:51:04 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:52:09.674 05:51:04 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:52:09.674 05:51:04 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:52:09.933 05:51:04 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:52:09.933 05:51:04 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:52:09.933 05:51:04 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:52:09.933 05:51:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:52:09.933 05:51:04 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:52:09.933 05:51:04 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:52:09.933 05:51:04 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:52:09.933 05:51:04 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:52:09.933 05:51:04 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:52:09.933 05:51:04 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:52:09.933 05:51:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:52:09.933 05:51:04 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:52:10.499 05:51:04 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:52:10.499 05:51:04 json_config -- json_config/json_config.sh@51 -- # local get_types 00:52:10.499 05:51:04 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:52:10.499 05:51:04 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:52:10.499 05:51:04 json_config -- json_config/json_config.sh@54 -- # sort 00:52:10.499 05:51:04 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:52:10.499 05:51:04 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:52:10.499 05:51:04 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:52:10.499 05:51:04 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:52:10.499 05:51:04 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:52:10.499 05:51:04 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:52:10.499 05:51:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:52:10.499 05:51:04 json_config -- json_config/json_config.sh@62 -- # return 0 00:52:10.499 05:51:04 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:52:10.499 05:51:04 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:52:10.499 05:51:04 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:52:10.499 05:51:04 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:52:10.499 05:51:04 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:52:10.499 05:51:04 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:52:10.499 05:51:04 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:52:10.499 05:51:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:52:10.499 05:51:04 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:52:10.499 05:51:04 json_config -- json_config/json_config.sh@240 -- # [[ tcp == 
\r\d\m\a ]] 00:52:10.499 05:51:04 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:52:10.499 05:51:04 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:52:10.499 05:51:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:52:10.499 MallocForNvmf0 00:52:10.499 05:51:05 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:52:10.499 05:51:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:52:10.757 MallocForNvmf1 00:52:11.014 05:51:05 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:52:11.014 05:51:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:52:11.271 [2024-12-09 05:51:05.641387] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:52:11.271 05:51:05 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:52:11.271 05:51:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:52:11.528 05:51:05 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:52:11.528 05:51:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:52:11.785 05:51:06 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:52:11.785 05:51:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:52:12.043 05:51:06 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:52:12.043 05:51:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:52:12.043 [2024-12-09 05:51:06.613926] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:52:12.301 05:51:06 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:52:12.301 05:51:06 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:52:12.301 05:51:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:52:12.301 05:51:06 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:52:12.301 05:51:06 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:52:12.301 05:51:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:52:12.301 05:51:06 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:52:12.301 05:51:06 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name 
MallocBdevForConfigChangeCheck 00:52:12.301 05:51:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:52:12.559 MallocBdevForConfigChangeCheck 00:52:12.559 05:51:07 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:52:12.559 05:51:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:52:12.559 05:51:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:52:12.559 05:51:07 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:52:12.559 05:51:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:52:13.127 INFO: shutting down applications... 00:52:13.127 05:51:07 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:52:13.127 05:51:07 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:52:13.127 05:51:07 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:52:13.127 05:51:07 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:52:13.127 05:51:07 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:52:13.387 Calling clear_iscsi_subsystem 00:52:13.387 Calling clear_nvmf_subsystem 00:52:13.387 Calling clear_nbd_subsystem 00:52:13.387 Calling clear_ublk_subsystem 00:52:13.387 Calling clear_vhost_blk_subsystem 00:52:13.387 Calling clear_vhost_scsi_subsystem 00:52:13.387 Calling clear_bdev_subsystem 00:52:13.387 05:51:07 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:52:13.387 05:51:07 json_config -- json_config/json_config.sh@350 -- # count=100 00:52:13.387 05:51:07 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:52:13.387 05:51:07 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:52:13.387 05:51:07 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:52:13.387 05:51:07 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:52:13.954 05:51:08 json_config -- json_config/json_config.sh@352 -- # break 00:52:13.954 05:51:08 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:52:13.954 05:51:08 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:52:13.954 05:51:08 json_config -- json_config/common.sh@31 -- # local app=target 00:52:13.954 05:51:08 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:52:13.954 05:51:08 json_config -- json_config/common.sh@35 -- # [[ -n 59037 ]] 00:52:13.954 05:51:08 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59037 00:52:13.954 05:51:08 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:52:13.954 05:51:08 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:52:13.954 05:51:08 json_config -- json_config/common.sh@41 -- # kill -0 59037 00:52:13.954 05:51:08 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:52:14.214 05:51:08 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:52:14.214 05:51:08 json_config -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:52:14.214 05:51:08 json_config -- json_config/common.sh@41 -- # kill -0 59037 00:52:14.214 05:51:08 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:52:14.214 05:51:08 json_config -- json_config/common.sh@43 -- # break 00:52:14.214 05:51:08 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:52:14.214 SPDK target shutdown done 00:52:14.214 05:51:08 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:52:14.214 INFO: relaunching applications... 00:52:14.214 05:51:08 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:52:14.214 05:51:08 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:52:14.214 05:51:08 json_config -- json_config/common.sh@9 -- # local app=target 00:52:14.214 05:51:08 json_config -- json_config/common.sh@10 -- # shift 00:52:14.214 05:51:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:52:14.214 05:51:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:52:14.214 05:51:08 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:52:14.214 05:51:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:52:14.214 05:51:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:52:14.214 05:51:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59319 00:52:14.214 Waiting for target to run... 00:52:14.214 05:51:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:52:14.214 05:51:08 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:52:14.214 05:51:08 json_config -- json_config/common.sh@25 -- # waitforlisten 59319 /var/tmp/spdk_tgt.sock 00:52:14.214 05:51:08 json_config -- common/autotest_common.sh@835 -- # '[' -z 59319 ']' 00:52:14.214 05:51:08 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:52:14.214 05:51:08 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:14.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:52:14.214 05:51:08 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:52:14.214 05:51:08 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:14.214 05:51:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:52:14.474 [2024-12-09 05:51:08.828116] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:52:14.474 [2024-12-09 05:51:08.828213] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59319 ] 00:52:14.733 [2024-12-09 05:51:09.138670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:14.733 [2024-12-09 05:51:09.160902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:14.993 [2024-12-09 05:51:09.475503] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:52:14.993 [2024-12-09 05:51:09.507560] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:52:15.263 05:51:09 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:15.263 05:51:09 json_config -- common/autotest_common.sh@868 -- # return 0 00:52:15.263 00:52:15.263 05:51:09 json_config -- json_config/common.sh@26 -- # echo '' 00:52:15.263 05:51:09 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:52:15.263 INFO: Checking if target configuration is the same... 00:52:15.263 05:51:09 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:52:15.263 05:51:09 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:52:15.263 05:51:09 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:52:15.263 05:51:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:52:15.263 + '[' 2 -ne 2 ']' 00:52:15.263 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:52:15.263 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:52:15.263 + rootdir=/home/vagrant/spdk_repo/spdk 00:52:15.263 +++ basename /dev/fd/62 00:52:15.557 ++ mktemp /tmp/62.XXX 00:52:15.557 + tmp_file_1=/tmp/62.k3n 00:52:15.557 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:52:15.557 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:52:15.557 + tmp_file_2=/tmp/spdk_tgt_config.json.4gC 00:52:15.557 + ret=0 00:52:15.557 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:52:15.820 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:52:15.820 + diff -u /tmp/62.k3n /tmp/spdk_tgt_config.json.4gC 00:52:15.820 INFO: JSON config files are the same 00:52:15.820 + echo 'INFO: JSON config files are the same' 00:52:15.820 + rm /tmp/62.k3n /tmp/spdk_tgt_config.json.4gC 00:52:15.820 + exit 0 00:52:15.820 05:51:10 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:52:15.820 INFO: changing configuration and checking if this can be detected... 00:52:15.820 05:51:10 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:52:15.820 05:51:10 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:52:15.820 05:51:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:52:16.079 05:51:10 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:52:16.079 05:51:10 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:52:16.079 05:51:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:52:16.079 + '[' 2 -ne 2 ']' 00:52:16.079 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:52:16.079 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:52:16.079 + rootdir=/home/vagrant/spdk_repo/spdk 00:52:16.079 +++ basename /dev/fd/62 00:52:16.079 ++ mktemp /tmp/62.XXX 00:52:16.079 + tmp_file_1=/tmp/62.JBg 00:52:16.079 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:52:16.079 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:52:16.079 + tmp_file_2=/tmp/spdk_tgt_config.json.xm6 00:52:16.079 + ret=0 00:52:16.079 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:52:16.338 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:52:16.597 + diff -u /tmp/62.JBg /tmp/spdk_tgt_config.json.xm6 00:52:16.597 + ret=1 00:52:16.597 + echo '=== Start of file: /tmp/62.JBg ===' 00:52:16.597 + cat /tmp/62.JBg 00:52:16.597 + echo '=== End of file: /tmp/62.JBg ===' 00:52:16.597 + echo '' 00:52:16.597 + echo '=== Start of file: /tmp/spdk_tgt_config.json.xm6 ===' 00:52:16.597 + cat /tmp/spdk_tgt_config.json.xm6 00:52:16.597 + echo '=== End of file: /tmp/spdk_tgt_config.json.xm6 ===' 00:52:16.597 + echo '' 00:52:16.597 + rm /tmp/62.JBg /tmp/spdk_tgt_config.json.xm6 00:52:16.597 + exit 1 00:52:16.597 INFO: configuration change detected. 00:52:16.597 05:51:10 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
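The two comparisons traced above are what json_diff.sh boils down to: dump the live configuration with save_config, normalize both sides with config_filter.py -method sort, and diff the results, exiting 0 when they match ("JSON config files are the same") and 1 after MallocBdevForConfigChangeCheck has been deleted ("configuration change detected"). A rough single-comparison equivalent, with shortened paths and placeholder file names, and assuming config_filter.py filters stdin as its argument-less invocation in the trace suggests:

    # compare the running target's config against the saved JSON (sketch of json_diff.sh)
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > live.json
    test/json_config/config_filter.py -method sort < live.json            > live.sorted.json
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > saved.sorted.json
    diff -u live.sorted.json saved.sorted.json && echo 'INFO: JSON config files are the same'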
00:52:16.597 05:51:10 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:52:16.597 05:51:10 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:52:16.597 05:51:10 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:52:16.597 05:51:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:52:16.597 05:51:10 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:52:16.597 05:51:10 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:52:16.597 05:51:10 json_config -- json_config/json_config.sh@324 -- # [[ -n 59319 ]] 00:52:16.597 05:51:10 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:52:16.597 05:51:10 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:52:16.597 05:51:10 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:52:16.597 05:51:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:52:16.598 05:51:10 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:52:16.598 05:51:10 json_config -- json_config/json_config.sh@200 -- # uname -s 00:52:16.598 05:51:10 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:52:16.598 05:51:10 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:52:16.598 05:51:11 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:52:16.598 05:51:11 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:52:16.598 05:51:11 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:52:16.598 05:51:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:52:16.598 05:51:11 json_config -- json_config/json_config.sh@330 -- # killprocess 59319 00:52:16.598 05:51:11 json_config -- common/autotest_common.sh@954 -- # '[' -z 59319 ']' 00:52:16.598 05:51:11 json_config -- common/autotest_common.sh@958 -- # kill -0 59319 00:52:16.598 05:51:11 json_config -- common/autotest_common.sh@959 -- # uname 00:52:16.598 05:51:11 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:16.598 05:51:11 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59319 00:52:16.598 05:51:11 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:52:16.598 killing process with pid 59319 00:52:16.598 05:51:11 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:52:16.598 05:51:11 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59319' 00:52:16.598 05:51:11 json_config -- common/autotest_common.sh@973 -- # kill 59319 00:52:16.598 05:51:11 json_config -- common/autotest_common.sh@978 -- # wait 59319 00:52:16.856 05:51:11 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:52:16.856 05:51:11 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:52:16.856 05:51:11 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:52:16.856 05:51:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:52:16.856 05:51:11 json_config -- json_config/json_config.sh@335 -- # return 0 00:52:16.856 INFO: Success 00:52:16.856 05:51:11 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:52:16.856 ************************************ 00:52:16.856 END TEST json_config 00:52:16.856 
************************************ 00:52:16.856 00:52:16.856 real 0m8.598s 00:52:16.856 user 0m12.622s 00:52:16.856 sys 0m1.518s 00:52:16.856 05:51:11 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:16.856 05:51:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:52:16.856 05:51:11 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:52:16.856 05:51:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:52:16.856 05:51:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:16.856 05:51:11 -- common/autotest_common.sh@10 -- # set +x 00:52:16.856 ************************************ 00:52:16.856 START TEST json_config_extra_key 00:52:16.856 ************************************ 00:52:16.856 05:51:11 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:52:16.856 05:51:11 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:52:16.856 05:51:11 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:52:16.856 05:51:11 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:52:16.856 05:51:11 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:52:16.856 05:51:11 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:52:16.856 05:51:11 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:52:16.856 05:51:11 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:52:16.856 05:51:11 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:52:16.856 05:51:11 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:52:16.856 05:51:11 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:52:16.856 05:51:11 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:52:16.856 05:51:11 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:52:16.856 05:51:11 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:52:16.856 05:51:11 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:52:16.856 05:51:11 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:52:16.856 05:51:11 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:52:16.856 05:51:11 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:52:16.856 05:51:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:52:16.856 05:51:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:52:16.856 05:51:11 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:52:16.856 05:51:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:52:16.856 05:51:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:52:16.856 05:51:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:52:16.856 05:51:11 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:52:16.856 05:51:11 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:52:16.856 05:51:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:52:16.856 05:51:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:52:16.856 05:51:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:52:16.856 05:51:11 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:52:16.856 05:51:11 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:52:16.856 05:51:11 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:52:16.856 05:51:11 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:52:16.856 05:51:11 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:52:16.856 05:51:11 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:52:16.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:16.856 --rc genhtml_branch_coverage=1 00:52:16.856 --rc genhtml_function_coverage=1 00:52:16.856 --rc genhtml_legend=1 00:52:16.856 --rc geninfo_all_blocks=1 00:52:16.856 --rc geninfo_unexecuted_blocks=1 00:52:16.856 00:52:16.856 ' 00:52:16.856 05:51:11 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:52:16.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:16.856 --rc genhtml_branch_coverage=1 00:52:16.856 --rc genhtml_function_coverage=1 00:52:16.856 --rc genhtml_legend=1 00:52:16.856 --rc geninfo_all_blocks=1 00:52:16.856 --rc geninfo_unexecuted_blocks=1 00:52:16.856 00:52:16.856 ' 00:52:16.856 05:51:11 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:52:16.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:16.856 --rc genhtml_branch_coverage=1 00:52:16.856 --rc genhtml_function_coverage=1 00:52:16.856 --rc genhtml_legend=1 00:52:16.856 --rc geninfo_all_blocks=1 00:52:16.856 --rc geninfo_unexecuted_blocks=1 00:52:16.856 00:52:16.856 ' 00:52:16.856 05:51:11 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:52:16.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:16.856 --rc genhtml_branch_coverage=1 00:52:16.856 --rc genhtml_function_coverage=1 00:52:16.856 --rc genhtml_legend=1 00:52:16.856 --rc geninfo_all_blocks=1 00:52:16.856 --rc geninfo_unexecuted_blocks=1 00:52:16.856 00:52:16.856 ' 00:52:16.856 05:51:11 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:52:16.856 05:51:11 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:52:16.856 05:51:11 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:52:16.856 05:51:11 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:52:16.856 05:51:11 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:52:16.856 05:51:11 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:52:16.856 05:51:11 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:52:16.856 05:51:11 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:52:16.856 05:51:11 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:52:16.856 05:51:11 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:52:16.856 05:51:11 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:52:17.116 05:51:11 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:52:17.116 05:51:11 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:52:17.116 05:51:11 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 00:52:17.116 05:51:11 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:52:17.116 05:51:11 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:52:17.116 05:51:11 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:52:17.116 05:51:11 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:52:17.116 05:51:11 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:52:17.116 05:51:11 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:52:17.116 05:51:11 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:52:17.116 05:51:11 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:52:17.116 05:51:11 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:52:17.116 05:51:11 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:17.116 05:51:11 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:17.116 05:51:11 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:17.116 05:51:11 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:52:17.116 05:51:11 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:17.116 05:51:11 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:52:17.116 05:51:11 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:52:17.116 05:51:11 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:52:17.116 05:51:11 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:52:17.116 05:51:11 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:52:17.116 05:51:11 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:52:17.116 05:51:11 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:52:17.116 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:52:17.116 05:51:11 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:52:17.116 05:51:11 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:52:17.116 05:51:11 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:52:17.116 05:51:11 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:52:17.116 05:51:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:52:17.116 05:51:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:52:17.116 05:51:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:52:17.116 05:51:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:52:17.116 05:51:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:52:17.116 05:51:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:52:17.116 05:51:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:52:17.116 05:51:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:52:17.116 05:51:11 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:52:17.116 INFO: launching applications... 00:52:17.116 05:51:11 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
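The "[: : integer expression expected" message emitted while sourcing nvmf/common.sh above is ordinary bash behavior rather than a test failure: line 33 runs '[' '' -eq 1 ']', and -eq requires integer operands, so the empty string triggers the warning and the test returns a non-zero status, after which the script simply continues. A minimal reproduction and the usual guard (illustrative variable name):

    x=''
    [ "$x" -eq 1 ] && echo yes        # prints "[: : integer expression expected", returns non-zero
    [ "${x:-0}" -eq 1 ] && echo yes   # defaulting the empty value avoids the warning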
00:52:17.116 05:51:11 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:52:17.116 05:51:11 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:52:17.116 05:51:11 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:52:17.116 05:51:11 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:52:17.117 05:51:11 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:52:17.117 05:51:11 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:52:17.117 05:51:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:52:17.117 05:51:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:52:17.117 05:51:11 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59503 00:52:17.117 05:51:11 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:52:17.117 Waiting for target to run... 00:52:17.117 05:51:11 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:52:17.117 05:51:11 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59503 /var/tmp/spdk_tgt.sock 00:52:17.117 05:51:11 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59503 ']' 00:52:17.117 05:51:11 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:52:17.117 05:51:11 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:17.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:52:17.117 05:51:11 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:52:17.117 05:51:11 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:17.117 05:51:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:52:17.117 [2024-12-09 05:51:11.507028] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:52:17.117 [2024-12-09 05:51:11.507155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59503 ] 00:52:17.376 [2024-12-09 05:51:11.795729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:17.376 [2024-12-09 05:51:11.816863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:18.312 05:51:12 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:18.312 05:51:12 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:52:18.312 00:52:18.312 05:51:12 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:52:18.312 INFO: shutting down applications... 00:52:18.312 05:51:12 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
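As with the earlier json_config relaunch, json_config_test_start_app launches spdk_tgt with a private RPC socket and, here, the extra_key.json config, then waitforlisten blocks until that socket answers. A sketch of the launch as it appears in the trace (paths abbreviated); the readiness poll is an assumption about what waitforlisten does, namely retrying a harmless RPC until it succeeds, not its literal source:

    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json test/json_config/extra_key.json &
    app_pid=$!
    # assumed readiness poll: retry an RPC until the UNIX domain socket is being served
    until scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done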
00:52:18.312 05:51:12 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:52:18.312 05:51:12 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:52:18.312 05:51:12 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:52:18.312 05:51:12 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59503 ]] 00:52:18.312 05:51:12 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59503 00:52:18.312 05:51:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:52:18.312 05:51:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:52:18.312 05:51:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59503 00:52:18.312 05:51:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:52:18.571 05:51:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:52:18.571 05:51:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:52:18.571 05:51:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59503 00:52:18.571 05:51:13 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:52:18.571 05:51:13 json_config_extra_key -- json_config/common.sh@43 -- # break 00:52:18.571 05:51:13 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:52:18.571 SPDK target shutdown done 00:52:18.571 05:51:13 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:52:18.571 Success 00:52:18.571 05:51:13 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:52:18.571 00:52:18.571 real 0m1.758s 00:52:18.571 user 0m1.686s 00:52:18.571 sys 0m0.288s 00:52:18.571 05:51:13 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:18.571 05:51:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:52:18.571 ************************************ 00:52:18.571 END TEST json_config_extra_key 00:52:18.571 ************************************ 00:52:18.571 05:51:13 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:52:18.571 05:51:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:52:18.571 05:51:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:18.571 05:51:13 -- common/autotest_common.sh@10 -- # set +x 00:52:18.571 ************************************ 00:52:18.571 START TEST alias_rpc 00:52:18.571 ************************************ 00:52:18.571 05:51:13 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:52:18.831 * Looking for test storage... 
00:52:18.831 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:52:18.831 05:51:13 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:52:18.831 05:51:13 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:52:18.831 05:51:13 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:52:18.831 05:51:13 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:52:18.831 05:51:13 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:52:18.831 05:51:13 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:52:18.831 05:51:13 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:52:18.831 05:51:13 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:52:18.831 05:51:13 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:52:18.831 05:51:13 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:52:18.831 05:51:13 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:52:18.831 05:51:13 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:52:18.831 05:51:13 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:52:18.831 05:51:13 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:52:18.831 05:51:13 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:52:18.831 05:51:13 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:52:18.831 05:51:13 alias_rpc -- scripts/common.sh@345 -- # : 1 00:52:18.831 05:51:13 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:52:18.831 05:51:13 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:52:18.831 05:51:13 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:52:18.831 05:51:13 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:52:18.831 05:51:13 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:52:18.831 05:51:13 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:52:18.831 05:51:13 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:52:18.831 05:51:13 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:52:18.831 05:51:13 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:52:18.831 05:51:13 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:52:18.831 05:51:13 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:52:18.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:52:18.831 05:51:13 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:52:18.831 05:51:13 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:52:18.831 05:51:13 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:52:18.831 05:51:13 alias_rpc -- scripts/common.sh@368 -- # return 0 00:52:18.831 05:51:13 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:52:18.831 05:51:13 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:52:18.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:18.831 --rc genhtml_branch_coverage=1 00:52:18.831 --rc genhtml_function_coverage=1 00:52:18.831 --rc genhtml_legend=1 00:52:18.831 --rc geninfo_all_blocks=1 00:52:18.831 --rc geninfo_unexecuted_blocks=1 00:52:18.831 00:52:18.831 ' 00:52:18.831 05:51:13 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:52:18.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:18.831 --rc genhtml_branch_coverage=1 00:52:18.831 --rc genhtml_function_coverage=1 00:52:18.831 --rc genhtml_legend=1 00:52:18.831 --rc geninfo_all_blocks=1 00:52:18.831 --rc geninfo_unexecuted_blocks=1 00:52:18.831 00:52:18.831 ' 00:52:18.831 05:51:13 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:52:18.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:18.831 --rc genhtml_branch_coverage=1 00:52:18.831 --rc genhtml_function_coverage=1 00:52:18.831 --rc genhtml_legend=1 00:52:18.831 --rc geninfo_all_blocks=1 00:52:18.831 --rc geninfo_unexecuted_blocks=1 00:52:18.831 00:52:18.831 ' 00:52:18.831 05:51:13 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:52:18.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:18.831 --rc genhtml_branch_coverage=1 00:52:18.831 --rc genhtml_function_coverage=1 00:52:18.831 --rc genhtml_legend=1 00:52:18.831 --rc geninfo_all_blocks=1 00:52:18.831 --rc geninfo_unexecuted_blocks=1 00:52:18.831 00:52:18.831 ' 00:52:18.831 05:51:13 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:52:18.831 05:51:13 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59593 00:52:18.831 05:51:13 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59593 00:52:18.831 05:51:13 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59593 ']' 00:52:18.831 05:51:13 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:52:18.831 05:51:13 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:18.831 05:51:13 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:18.831 05:51:13 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:18.831 05:51:13 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:18.831 05:51:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:52:18.831 [2024-12-09 05:51:13.375043] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
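Each test preamble traced above runs the same lcov probe: scripts/common.sh compares the installed lcov version against 2 with cmp_versions, splitting the version strings on '.', '-' and ':' and comparing field by field, and only then exports the LCOV_OPTS coverage flags. A compact, simplified sketch of that less-than check (the real helper also normalizes non-numeric fields through its decimal function):

    lt() {   # success when version $1 < version $2, in the spirit of cmp_versions
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'lcov older than 2: use --rc options'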
00:52:18.831 [2024-12-09 05:51:13.375148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59593 ] 00:52:19.091 [2024-12-09 05:51:13.515386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:19.091 [2024-12-09 05:51:13.544099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:19.349 05:51:13 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:19.349 05:51:13 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:52:19.349 05:51:13 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:52:19.608 05:51:14 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59593 00:52:19.608 05:51:14 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59593 ']' 00:52:19.608 05:51:14 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59593 00:52:19.608 05:51:14 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:52:19.608 05:51:14 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:19.608 05:51:14 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59593 00:52:19.608 killing process with pid 59593 00:52:19.608 05:51:14 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:52:19.608 05:51:14 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:52:19.608 05:51:14 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59593' 00:52:19.608 05:51:14 alias_rpc -- common/autotest_common.sh@973 -- # kill 59593 00:52:19.608 05:51:14 alias_rpc -- common/autotest_common.sh@978 -- # wait 59593 00:52:19.877 ************************************ 00:52:19.877 END TEST alias_rpc 00:52:19.877 ************************************ 00:52:19.877 00:52:19.877 real 0m1.148s 00:52:19.877 user 0m1.352s 00:52:19.877 sys 0m0.320s 00:52:19.877 05:51:14 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:19.877 05:51:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:52:19.877 05:51:14 -- spdk/autotest.sh@163 -- # [[ 1 -eq 0 ]] 00:52:19.877 05:51:14 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:52:19.877 05:51:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:52:19.877 05:51:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:19.877 05:51:14 -- common/autotest_common.sh@10 -- # set +x 00:52:19.877 ************************************ 00:52:19.877 START TEST dpdk_mem_utility 00:52:19.877 ************************************ 00:52:19.877 05:51:14 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:52:19.877 * Looking for test storage... 
00:52:19.877 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:52:19.877 05:51:14 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:52:19.877 05:51:14 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:52:19.877 05:51:14 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:52:20.154 05:51:14 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:52:20.154 05:51:14 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:52:20.154 05:51:14 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:52:20.154 05:51:14 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:52:20.154 05:51:14 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:52:20.154 05:51:14 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:52:20.154 05:51:14 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:52:20.154 05:51:14 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:52:20.154 05:51:14 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:52:20.154 05:51:14 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:52:20.154 05:51:14 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:52:20.154 05:51:14 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:52:20.154 05:51:14 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:52:20.154 05:51:14 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:52:20.154 05:51:14 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:52:20.154 05:51:14 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:52:20.154 05:51:14 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:52:20.154 05:51:14 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:52:20.154 05:51:14 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:52:20.154 05:51:14 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:52:20.154 05:51:14 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:52:20.154 05:51:14 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:52:20.154 05:51:14 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:52:20.154 05:51:14 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:52:20.154 05:51:14 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:52:20.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:52:20.154 05:51:14 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:52:20.154 05:51:14 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:52:20.154 05:51:14 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:52:20.154 05:51:14 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:52:20.154 05:51:14 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:52:20.154 05:51:14 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:52:20.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:20.154 --rc genhtml_branch_coverage=1 00:52:20.154 --rc genhtml_function_coverage=1 00:52:20.154 --rc genhtml_legend=1 00:52:20.154 --rc geninfo_all_blocks=1 00:52:20.154 --rc geninfo_unexecuted_blocks=1 00:52:20.154 00:52:20.154 ' 00:52:20.154 05:51:14 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:52:20.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:20.154 --rc genhtml_branch_coverage=1 00:52:20.154 --rc genhtml_function_coverage=1 00:52:20.154 --rc genhtml_legend=1 00:52:20.154 --rc geninfo_all_blocks=1 00:52:20.154 --rc geninfo_unexecuted_blocks=1 00:52:20.154 00:52:20.154 ' 00:52:20.155 05:51:14 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:52:20.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:20.155 --rc genhtml_branch_coverage=1 00:52:20.155 --rc genhtml_function_coverage=1 00:52:20.155 --rc genhtml_legend=1 00:52:20.155 --rc geninfo_all_blocks=1 00:52:20.155 --rc geninfo_unexecuted_blocks=1 00:52:20.155 00:52:20.155 ' 00:52:20.155 05:51:14 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:52:20.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:20.155 --rc genhtml_branch_coverage=1 00:52:20.155 --rc genhtml_function_coverage=1 00:52:20.155 --rc genhtml_legend=1 00:52:20.155 --rc geninfo_all_blocks=1 00:52:20.155 --rc geninfo_unexecuted_blocks=1 00:52:20.155 00:52:20.155 ' 00:52:20.155 05:51:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:52:20.155 05:51:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59674 00:52:20.155 05:51:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59674 00:52:20.155 05:51:14 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59674 ']' 00:52:20.155 05:51:14 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:20.155 05:51:14 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:20.155 05:51:14 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:20.155 05:51:14 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:20.155 05:51:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:52:20.155 05:51:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:52:20.155 [2024-12-09 05:51:14.541919] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
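The dpdk_mem_utility output that follows comes from two steps visible in test_dpdk_mem_info.sh: the env_dpdk_get_mem_stats RPC (issued via the rpc_cmd wrapper in the trace) makes the target write a dump file, answering with /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then summarizes that dump, first as heap/mempool/memzone totals and then, with -m 0, as the per-element listing for heap id 0. Sketched with the paths shown in the trace:

    # ask the running spdk_tgt to dump its DPDK memory state, then summarize it
    scripts/rpc.py env_dpdk_get_mem_stats    # target replies: {"filename": "/tmp/spdk_mem_dump.txt"}
    scripts/dpdk_mem_info.py                 # heap / mempool / memzone totals
    scripts/dpdk_mem_info.py -m 0            # free and allocated elements of heap id 0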
00:52:20.155 [2024-12-09 05:51:14.542339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59674 ] 00:52:20.155 [2024-12-09 05:51:14.680418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:20.155 [2024-12-09 05:51:14.711082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:21.091 05:51:15 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:21.091 05:51:15 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:52:21.091 05:51:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:52:21.091 05:51:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:52:21.091 05:51:15 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:21.091 05:51:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:52:21.091 { 00:52:21.091 "filename": "/tmp/spdk_mem_dump.txt" 00:52:21.091 } 00:52:21.091 05:51:15 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:21.091 05:51:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:52:21.091 DPDK memory size 818.000000 MiB in 1 heap(s) 00:52:21.091 1 heaps totaling size 818.000000 MiB 00:52:21.091 size: 818.000000 MiB heap id: 0 00:52:21.091 end heaps---------- 00:52:21.091 9 mempools totaling size 603.782043 MiB 00:52:21.091 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:52:21.091 size: 158.602051 MiB name: PDU_data_out_Pool 00:52:21.091 size: 100.555481 MiB name: bdev_io_59674 00:52:21.091 size: 50.003479 MiB name: msgpool_59674 00:52:21.091 size: 36.509338 MiB name: fsdev_io_59674 00:52:21.091 size: 21.763794 MiB name: PDU_Pool 00:52:21.091 size: 19.513306 MiB name: SCSI_TASK_Pool 00:52:21.091 size: 4.133484 MiB name: evtpool_59674 00:52:21.091 size: 0.026123 MiB name: Session_Pool 00:52:21.091 end mempools------- 00:52:21.091 6 memzones totaling size 4.142822 MiB 00:52:21.091 size: 1.000366 MiB name: RG_ring_0_59674 00:52:21.091 size: 1.000366 MiB name: RG_ring_1_59674 00:52:21.091 size: 1.000366 MiB name: RG_ring_4_59674 00:52:21.091 size: 1.000366 MiB name: RG_ring_5_59674 00:52:21.091 size: 0.125366 MiB name: RG_ring_2_59674 00:52:21.091 size: 0.015991 MiB name: RG_ring_3_59674 00:52:21.091 end memzones------- 00:52:21.091 05:51:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:52:21.091 heap id: 0 total size: 818.000000 MiB number of busy elements: 237 number of free elements: 15 00:52:21.091 list of free elements. 
size: 10.817139 MiB 00:52:21.091 element at address: 0x200019200000 with size: 0.999878 MiB 00:52:21.091 element at address: 0x200019400000 with size: 0.999878 MiB 00:52:21.091 element at address: 0x200000400000 with size: 0.996338 MiB 00:52:21.091 element at address: 0x200032000000 with size: 0.994446 MiB 00:52:21.091 element at address: 0x200006400000 with size: 0.959839 MiB 00:52:21.091 element at address: 0x200012c00000 with size: 0.944275 MiB 00:52:21.091 element at address: 0x200019600000 with size: 0.936584 MiB 00:52:21.091 element at address: 0x200000200000 with size: 0.717346 MiB 00:52:21.091 element at address: 0x20001ae00000 with size: 0.571533 MiB 00:52:21.091 element at address: 0x200000c00000 with size: 0.490845 MiB 00:52:21.091 element at address: 0x20000a600000 with size: 0.489441 MiB 00:52:21.091 element at address: 0x200019800000 with size: 0.485657 MiB 00:52:21.091 element at address: 0x200003e00000 with size: 0.481018 MiB 00:52:21.092 element at address: 0x200028200000 with size: 0.396667 MiB 00:52:21.092 element at address: 0x200000800000 with size: 0.353394 MiB 00:52:21.092 list of standard malloc elements. size: 199.253967 MiB 00:52:21.092 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:52:21.092 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:52:21.092 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:52:21.092 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:52:21.092 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:52:21.092 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:52:21.092 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:52:21.092 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:52:21.092 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:52:21.092 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:52:21.092 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:52:21.092 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:52:21.092 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:52:21.092 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:52:21.092 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:52:21.092 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:52:21.092 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:52:21.092 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:52:21.092 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:52:21.092 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:52:21.092 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:52:21.092 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:52:21.092 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:52:21.092 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20000085a780 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20000085a980 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20000085ec40 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20000087efc0 with size: 0.000183 MiB 
00:52:21.092 element at address: 0x20000087f080 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20000087f140 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20000087f200 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20000087f380 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20000087f440 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20000087f500 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20000087f680 with size: 0.000183 MiB 00:52:21.092 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:52:21.092 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200000cff000 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200003efb980 with size: 0.000183 MiB 00:52:21.092 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:52:21.092 element at 
address: 0x20000a67d580 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:52:21.092 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:52:21.092 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae94000 
with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:52:21.092 element at address: 0x2000282658c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x200028265980 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826c580 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826c780 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826c840 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826c900 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826d080 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826d140 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826d200 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826d380 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826d440 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826d500 with size: 0.000183 MiB 
00:52:21.092 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826d680 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826d740 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826d800 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826d980 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826da40 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826db00 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826de00 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826df80 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826e040 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826e100 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826e280 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826e340 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826e400 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826e580 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826e640 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826e700 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826e880 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826e940 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826f000 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826f180 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826f240 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826f300 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826f480 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826f540 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826f600 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826f780 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826f840 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826f900 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:52:21.092 element at 
address: 0x20002826fa80 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:52:21.092 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:52:21.092 list of memzone associated elements. size: 607.928894 MiB 00:52:21.092 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:52:21.092 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:52:21.093 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:52:21.093 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:52:21.093 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:52:21.093 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59674_0 00:52:21.093 element at address: 0x200000dff380 with size: 48.003052 MiB 00:52:21.093 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59674_0 00:52:21.093 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:52:21.093 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59674_0 00:52:21.093 element at address: 0x2000199be940 with size: 20.255554 MiB 00:52:21.093 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:52:21.093 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:52:21.093 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:52:21.093 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:52:21.093 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59674_0 00:52:21.093 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:52:21.093 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59674 00:52:21.093 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:52:21.093 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59674 00:52:21.093 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:52:21.093 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:52:21.093 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:52:21.093 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:52:21.093 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:52:21.093 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:52:21.093 element at address: 0x200003efba40 with size: 1.008118 MiB 00:52:21.093 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:52:21.093 element at address: 0x200000cff180 with size: 1.000488 MiB 00:52:21.093 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59674 00:52:21.093 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:52:21.093 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59674 00:52:21.093 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:52:21.093 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59674 00:52:21.093 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:52:21.093 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59674 00:52:21.093 element at address: 0x20000087f740 with size: 0.500488 MiB 00:52:21.093 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59674 00:52:21.093 element at address: 0x200000c7ee00 with 
size: 0.500488 MiB 00:52:21.093 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59674 00:52:21.093 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:52:21.093 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:52:21.093 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:52:21.093 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:52:21.093 element at address: 0x20001987c540 with size: 0.250488 MiB 00:52:21.093 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:52:21.093 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:52:21.093 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59674 00:52:21.093 element at address: 0x20000085ed00 with size: 0.125488 MiB 00:52:21.093 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59674 00:52:21.093 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:52:21.093 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:52:21.093 element at address: 0x200028265a40 with size: 0.023743 MiB 00:52:21.093 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:52:21.093 element at address: 0x20000085aa40 with size: 0.016113 MiB 00:52:21.093 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59674 00:52:21.093 element at address: 0x20002826bb80 with size: 0.002441 MiB 00:52:21.093 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:52:21.093 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:52:21.093 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59674 00:52:21.093 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:52:21.093 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59674 00:52:21.093 element at address: 0x20000085a840 with size: 0.000305 MiB 00:52:21.093 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59674 00:52:21.093 element at address: 0x20002826c640 with size: 0.000305 MiB 00:52:21.093 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:52:21.093 05:51:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:52:21.093 05:51:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59674 00:52:21.093 05:51:15 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59674 ']' 00:52:21.093 05:51:15 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59674 00:52:21.093 05:51:15 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:52:21.093 05:51:15 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:21.093 05:51:15 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59674 00:52:21.093 killing process with pid 59674 00:52:21.093 05:51:15 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:52:21.093 05:51:15 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:52:21.093 05:51:15 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59674' 00:52:21.093 05:51:15 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59674 00:52:21.093 05:51:15 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59674 00:52:21.351 00:52:21.351 real 0m1.526s 00:52:21.351 user 0m1.745s 00:52:21.351 sys 0m0.313s 00:52:21.351 05:51:15 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:21.351 05:51:15 
dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:52:21.351 ************************************ 00:52:21.351 END TEST dpdk_mem_utility 00:52:21.351 ************************************ 00:52:21.351 05:51:15 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:52:21.351 05:51:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:52:21.351 05:51:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:21.351 05:51:15 -- common/autotest_common.sh@10 -- # set +x 00:52:21.351 ************************************ 00:52:21.351 START TEST event 00:52:21.351 ************************************ 00:52:21.351 05:51:15 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:52:21.609 * Looking for test storage... 00:52:21.609 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:52:21.609 05:51:15 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:52:21.609 05:51:15 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:52:21.609 05:51:15 event -- common/autotest_common.sh@1711 -- # lcov --version 00:52:21.609 05:51:16 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:52:21.609 05:51:16 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:52:21.609 05:51:16 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:52:21.609 05:51:16 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:52:21.609 05:51:16 event -- scripts/common.sh@336 -- # IFS=.-: 00:52:21.609 05:51:16 event -- scripts/common.sh@336 -- # read -ra ver1 00:52:21.609 05:51:16 event -- scripts/common.sh@337 -- # IFS=.-: 00:52:21.609 05:51:16 event -- scripts/common.sh@337 -- # read -ra ver2 00:52:21.609 05:51:16 event -- scripts/common.sh@338 -- # local 'op=<' 00:52:21.609 05:51:16 event -- scripts/common.sh@340 -- # ver1_l=2 00:52:21.609 05:51:16 event -- scripts/common.sh@341 -- # ver2_l=1 00:52:21.609 05:51:16 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:52:21.609 05:51:16 event -- scripts/common.sh@344 -- # case "$op" in 00:52:21.609 05:51:16 event -- scripts/common.sh@345 -- # : 1 00:52:21.609 05:51:16 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:52:21.609 05:51:16 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:52:21.609 05:51:16 event -- scripts/common.sh@365 -- # decimal 1 00:52:21.609 05:51:16 event -- scripts/common.sh@353 -- # local d=1 00:52:21.609 05:51:16 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:52:21.609 05:51:16 event -- scripts/common.sh@355 -- # echo 1 00:52:21.609 05:51:16 event -- scripts/common.sh@365 -- # ver1[v]=1 00:52:21.609 05:51:16 event -- scripts/common.sh@366 -- # decimal 2 00:52:21.609 05:51:16 event -- scripts/common.sh@353 -- # local d=2 00:52:21.609 05:51:16 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:52:21.609 05:51:16 event -- scripts/common.sh@355 -- # echo 2 00:52:21.609 05:51:16 event -- scripts/common.sh@366 -- # ver2[v]=2 00:52:21.609 05:51:16 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:52:21.609 05:51:16 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:52:21.609 05:51:16 event -- scripts/common.sh@368 -- # return 0 00:52:21.609 05:51:16 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:52:21.609 05:51:16 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:52:21.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:21.609 --rc genhtml_branch_coverage=1 00:52:21.609 --rc genhtml_function_coverage=1 00:52:21.609 --rc genhtml_legend=1 00:52:21.609 --rc geninfo_all_blocks=1 00:52:21.609 --rc geninfo_unexecuted_blocks=1 00:52:21.609 00:52:21.609 ' 00:52:21.609 05:51:16 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:52:21.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:21.609 --rc genhtml_branch_coverage=1 00:52:21.609 --rc genhtml_function_coverage=1 00:52:21.609 --rc genhtml_legend=1 00:52:21.609 --rc geninfo_all_blocks=1 00:52:21.609 --rc geninfo_unexecuted_blocks=1 00:52:21.609 00:52:21.609 ' 00:52:21.609 05:51:16 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:52:21.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:21.609 --rc genhtml_branch_coverage=1 00:52:21.609 --rc genhtml_function_coverage=1 00:52:21.609 --rc genhtml_legend=1 00:52:21.609 --rc geninfo_all_blocks=1 00:52:21.609 --rc geninfo_unexecuted_blocks=1 00:52:21.609 00:52:21.609 ' 00:52:21.609 05:51:16 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:52:21.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:21.609 --rc genhtml_branch_coverage=1 00:52:21.609 --rc genhtml_function_coverage=1 00:52:21.609 --rc genhtml_legend=1 00:52:21.609 --rc geninfo_all_blocks=1 00:52:21.609 --rc geninfo_unexecuted_blocks=1 00:52:21.609 00:52:21.609 ' 00:52:21.609 05:51:16 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:52:21.609 05:51:16 event -- bdev/nbd_common.sh@6 -- # set -e 00:52:21.609 05:51:16 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:52:21.609 05:51:16 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:52:21.609 05:51:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:21.609 05:51:16 event -- common/autotest_common.sh@10 -- # set +x 00:52:21.609 ************************************ 00:52:21.609 START TEST event_perf 00:52:21.609 ************************************ 00:52:21.609 05:51:16 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:52:21.609 Running I/O for 1 seconds...[2024-12-09 
05:51:16.107893] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:52:21.609 [2024-12-09 05:51:16.108132] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59770 ] 00:52:21.868 [2024-12-09 05:51:16.254078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:52:21.868 [2024-12-09 05:51:16.285171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:21.868 [2024-12-09 05:51:16.285300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:52:21.868 [2024-12-09 05:51:16.285402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:21.868 Running I/O for 1 seconds...[2024-12-09 05:51:16.285404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:52:22.801 00:52:22.801 lcore 0: 198052 00:52:22.801 lcore 1: 198052 00:52:22.801 lcore 2: 198052 00:52:22.801 lcore 3: 198051 00:52:22.801 done. 00:52:22.801 00:52:22.801 ************************************ 00:52:22.801 END TEST event_perf 00:52:22.801 ************************************ 00:52:22.801 real 0m1.234s 00:52:22.802 user 0m4.072s 00:52:22.802 sys 0m0.037s 00:52:22.802 05:51:17 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:22.802 05:51:17 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:52:22.802 05:51:17 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:52:22.802 05:51:17 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:52:22.802 05:51:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:22.802 05:51:17 event -- common/autotest_common.sh@10 -- # set +x 00:52:22.802 ************************************ 00:52:22.802 START TEST event_reactor 00:52:22.802 ************************************ 00:52:22.802 05:51:17 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:52:23.059 [2024-12-09 05:51:17.391400] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
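For comparison with the single-core runs later in this log, the event_perf pass that just finished used core mask 0xF: four reactors came up on cores 0-3 and each lcore prints its event count for the 1-second window (roughly 198k apiece here). The binary and arguments are exactly the ones the harness traced above:
    # SPDK event framework perf test: 4 cores (mask 0xF), 1 second
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1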
00:52:23.059 [2024-12-09 05:51:17.391633] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59810 ] 00:52:23.059 [2024-12-09 05:51:17.534623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:23.059 [2024-12-09 05:51:17.561113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:24.435 test_start 00:52:24.435 oneshot 00:52:24.435 tick 100 00:52:24.435 tick 100 00:52:24.435 tick 250 00:52:24.435 tick 100 00:52:24.435 tick 100 00:52:24.435 tick 250 00:52:24.435 tick 100 00:52:24.435 tick 500 00:52:24.435 tick 100 00:52:24.435 tick 100 00:52:24.436 tick 250 00:52:24.436 tick 100 00:52:24.436 tick 100 00:52:24.436 test_end 00:52:24.436 00:52:24.436 real 0m1.224s 00:52:24.436 user 0m1.078s 00:52:24.436 sys 0m0.042s 00:52:24.436 05:51:18 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:24.436 05:51:18 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:52:24.436 ************************************ 00:52:24.436 END TEST event_reactor 00:52:24.436 ************************************ 00:52:24.436 05:51:18 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:52:24.436 05:51:18 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:52:24.436 05:51:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:24.436 05:51:18 event -- common/autotest_common.sh@10 -- # set +x 00:52:24.436 ************************************ 00:52:24.436 START TEST event_reactor_perf 00:52:24.436 ************************************ 00:52:24.436 05:51:18 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:52:24.436 [2024-12-09 05:51:18.664958] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:52:24.436 [2024-12-09 05:51:18.665062] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59840 ] 00:52:24.436 [2024-12-09 05:51:18.804626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:24.436 [2024-12-09 05:51:18.834839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:25.373 test_start 00:52:25.373 test_end 00:52:25.373 Performance: 474930 events per second 00:52:25.373 00:52:25.373 real 0m1.221s 00:52:25.373 user 0m1.083s 00:52:25.373 sys 0m0.034s 00:52:25.373 05:51:19 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:25.373 ************************************ 00:52:25.373 END TEST event_reactor_perf 00:52:25.373 ************************************ 00:52:25.373 05:51:19 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:52:25.373 05:51:19 event -- event/event.sh@49 -- # uname -s 00:52:25.373 05:51:19 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:52:25.373 05:51:19 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:52:25.373 05:51:19 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:52:25.373 05:51:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:25.373 05:51:19 event -- common/autotest_common.sh@10 -- # set +x 00:52:25.373 ************************************ 00:52:25.373 START TEST event_scheduler 00:52:25.373 ************************************ 00:52:25.373 05:51:19 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:52:25.632 * Looking for test storage... 
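The reactor_perf figure just above is a single-core number: the EAL parameters show core mask 0x1, so only one reactor runs (on core 0), and "Performance: 474930 events per second" is how many events that reactor turned around during the 1-second run requested with -t 1. Reproduced as traced:
    # single-reactor event throughput, 1 second
    /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1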
00:52:25.633 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:52:25.633 05:51:20 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:52:25.633 05:51:20 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:52:25.633 05:51:20 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:52:25.633 05:51:20 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:52:25.633 05:51:20 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:52:25.633 05:51:20 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:52:25.633 05:51:20 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:52:25.633 05:51:20 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:52:25.633 05:51:20 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:52:25.633 05:51:20 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:52:25.633 05:51:20 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:52:25.633 05:51:20 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:52:25.633 05:51:20 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:52:25.633 05:51:20 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:52:25.633 05:51:20 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:52:25.633 05:51:20 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:52:25.633 05:51:20 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:52:25.633 05:51:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:52:25.633 05:51:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:52:25.633 05:51:20 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:52:25.633 05:51:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:52:25.633 05:51:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:52:25.633 05:51:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:52:25.633 05:51:20 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:52:25.633 05:51:20 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:52:25.633 05:51:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:52:25.633 05:51:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:52:25.633 05:51:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:52:25.633 05:51:20 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:52:25.633 05:51:20 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:52:25.633 05:51:20 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:52:25.633 05:51:20 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:52:25.633 05:51:20 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:52:25.633 05:51:20 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:52:25.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:25.633 --rc genhtml_branch_coverage=1 00:52:25.633 --rc genhtml_function_coverage=1 00:52:25.633 --rc genhtml_legend=1 00:52:25.633 --rc geninfo_all_blocks=1 00:52:25.633 --rc geninfo_unexecuted_blocks=1 00:52:25.633 00:52:25.633 ' 00:52:25.633 05:51:20 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:52:25.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:25.633 --rc genhtml_branch_coverage=1 00:52:25.633 --rc genhtml_function_coverage=1 00:52:25.633 --rc genhtml_legend=1 00:52:25.633 --rc geninfo_all_blocks=1 00:52:25.633 --rc geninfo_unexecuted_blocks=1 00:52:25.633 00:52:25.633 ' 00:52:25.633 05:51:20 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:52:25.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:25.633 --rc genhtml_branch_coverage=1 00:52:25.633 --rc genhtml_function_coverage=1 00:52:25.633 --rc genhtml_legend=1 00:52:25.633 --rc geninfo_all_blocks=1 00:52:25.633 --rc geninfo_unexecuted_blocks=1 00:52:25.633 00:52:25.633 ' 00:52:25.633 05:51:20 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:52:25.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:25.633 --rc genhtml_branch_coverage=1 00:52:25.633 --rc genhtml_function_coverage=1 00:52:25.633 --rc genhtml_legend=1 00:52:25.633 --rc geninfo_all_blocks=1 00:52:25.633 --rc geninfo_unexecuted_blocks=1 00:52:25.633 00:52:25.633 ' 00:52:25.633 05:51:20 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:52:25.633 05:51:20 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59905 00:52:25.633 05:51:20 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:52:25.633 05:51:20 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:52:25.633 05:51:20 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59905 00:52:25.633 05:51:20 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59905 ']' 00:52:25.633 05:51:20 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:25.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:25.633 05:51:20 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:25.633 05:51:20 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:25.633 05:51:20 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:25.633 05:51:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:52:25.633 [2024-12-09 05:51:20.169521] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:52:25.633 [2024-12-09 05:51:20.169617] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59905 ] 00:52:25.892 [2024-12-09 05:51:20.320435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:52:25.892 [2024-12-09 05:51:20.386359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:25.892 [2024-12-09 05:51:20.386521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:25.892 [2024-12-09 05:51:20.386672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:52:25.892 [2024-12-09 05:51:20.386678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:52:26.152 05:51:20 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:26.152 05:51:20 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:52:26.152 05:51:20 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:52:26.152 05:51:20 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:26.152 05:51:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:52:26.152 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:52:26.152 POWER: Cannot set governor of lcore 0 to userspace 00:52:26.152 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:52:26.152 POWER: Cannot set governor of lcore 0 to performance 00:52:26.152 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:52:26.152 POWER: Cannot set governor of lcore 0 to userspace 00:52:26.152 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:52:26.152 POWER: Cannot set governor of lcore 0 to userspace 00:52:26.152 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:52:26.152 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:52:26.152 POWER: Unable to set Power Management Environment for lcore 0 00:52:26.152 [2024-12-09 05:51:20.487605] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:52:26.152 [2024-12-09 05:51:20.488071] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:52:26.152 [2024-12-09 05:51:20.488277] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:52:26.152 [2024-12-09 05:51:20.488733] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:52:26.152 [2024-12-09 05:51:20.488971] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:52:26.152 [2024-12-09 05:51:20.489427] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:52:26.152 05:51:20 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:26.152 05:51:20 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:52:26.152 05:51:20 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:26.152 05:51:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:52:26.152 [2024-12-09 05:51:20.550623] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:52:26.152 05:51:20 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:26.152 05:51:20 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:52:26.152 05:51:20 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:52:26.152 05:51:20 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:26.152 05:51:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:52:26.152 ************************************ 00:52:26.152 START TEST scheduler_create_thread 00:52:26.152 ************************************ 00:52:26.152 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:52:26.152 05:51:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:52:26.152 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:26.152 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:52:26.152 2 00:52:26.152 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:26.152 05:51:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:52:26.152 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:26.152 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:52:26.152 3 00:52:26.152 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:26.152 05:51:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:52:26.152 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:26.152 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:52:26.152 4 00:52:26.152 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:26.152 05:51:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:52:26.152 05:51:20 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:52:26.152 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:52:26.152 5 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:52:26.153 6 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:52:26.153 7 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:52:26.153 8 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:52:26.153 9 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:52:26.153 10 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:26.153 05:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:52:27.531 05:51:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:27.790 05:51:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:52:27.790 05:51:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:52:27.790 05:51:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:27.790 05:51:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:52:28.729 05:51:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:28.729 00:52:28.729 real 0m2.612s 00:52:28.729 user 0m0.014s 00:52:28.729 sys 0m0.003s 00:52:28.729 05:51:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:28.729 05:51:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:52:28.729 ************************************ 00:52:28.729 END TEST scheduler_create_thread 00:52:28.729 ************************************ 00:52:28.729 05:51:23 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:52:28.729 05:51:23 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59905 00:52:28.729 05:51:23 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59905 ']' 00:52:28.729 05:51:23 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59905 00:52:28.729 05:51:23 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:52:28.729 05:51:23 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:28.729 05:51:23 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59905 00:52:28.729 killing process with pid 59905 00:52:28.729 05:51:23 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:52:28.729 05:51:23 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:52:28.729 05:51:23 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59905' 00:52:28.729 05:51:23 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59905 00:52:28.729 05:51:23 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 59905 00:52:29.297 [2024-12-09 05:51:23.653973] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:52:29.297 00:52:29.297 real 0m3.863s 00:52:29.297 user 0m5.802s 00:52:29.297 sys 0m0.340s 00:52:29.297 05:51:23 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:29.297 05:51:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:52:29.297 ************************************ 00:52:29.297 END TEST event_scheduler 00:52:29.297 ************************************ 00:52:29.297 05:51:23 event -- event/event.sh@51 -- # modprobe -n nbd 00:52:29.297 05:51:23 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:52:29.297 05:51:23 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:52:29.297 05:51:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:29.297 05:51:23 event -- common/autotest_common.sh@10 -- # set +x 00:52:29.297 ************************************ 00:52:29.297 START TEST app_repeat 00:52:29.297 ************************************ 00:52:29.297 05:51:23 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:52:29.297 05:51:23 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:52:29.297 05:51:23 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:52:29.297 05:51:23 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:52:29.297 05:51:23 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:52:29.297 05:51:23 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:52:29.297 05:51:23 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:52:29.297 05:51:23 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:52:29.297 05:51:23 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60009 00:52:29.297 05:51:23 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:52:29.297 05:51:23 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:52:29.297 Process app_repeat pid: 60009 00:52:29.297 spdk_app_start Round 0 00:52:29.297 05:51:23 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60009' 00:52:29.297 05:51:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:52:29.297 05:51:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:52:29.297 05:51:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60009 /var/tmp/spdk-nbd.sock 00:52:29.297 05:51:23 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60009 ']' 00:52:29.297 05:51:23 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:52:29.297 05:51:23 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:29.297 05:51:23 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:52:29.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:52:29.297 05:51:23 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:29.297 05:51:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:52:29.297 [2024-12-09 05:51:23.875991] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
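The scheduler_create_thread sub-test that finished above is driven entirely through the test plugin's RPCs (rpc here is rpc_cmd, the harness's wrapper around scripts/rpc.py). The traced calls create active and idle threads pinned to individual cores, drop one thread to 50% active, and delete another; issued by hand against a scheduler app started the same way (-m 0xF -p 0x2 --wait-for-rpc -f), they would look roughly like this, with the numeric thread IDs being whatever the earlier create calls returned:
    # pinned thread on core 0 reporting 100% busy, and an idle one on the same core
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # lower thread 11 to 50% active, then delete thread 12
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12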
00:52:29.297 [2024-12-09 05:51:23.876085] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60009 ] 00:52:29.557 [2024-12-09 05:51:24.020557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:52:29.557 [2024-12-09 05:51:24.050027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:29.557 [2024-12-09 05:51:24.050035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:29.816 05:51:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:29.816 05:51:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:52:29.816 05:51:24 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:52:29.816 Malloc0 00:52:29.816 05:51:24 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:52:30.076 Malloc1 00:52:30.076 05:51:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:52:30.076 05:51:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:52:30.076 05:51:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:52:30.076 05:51:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:52:30.076 05:51:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:52:30.076 05:51:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:52:30.076 05:51:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:52:30.076 05:51:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:52:30.076 05:51:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:52:30.076 05:51:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:52:30.076 05:51:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:52:30.076 05:51:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:52:30.076 05:51:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:52:30.076 05:51:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:52:30.076 05:51:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:52:30.076 05:51:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:52:30.336 /dev/nbd0 00:52:30.336 05:51:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:52:30.336 05:51:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:52:30.336 05:51:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:52:30.336 05:51:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:52:30.336 05:51:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:52:30.336 05:51:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:52:30.336 05:51:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:52:30.336 05:51:24 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:52:30.336 05:51:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:52:30.336 05:51:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:52:30.336 05:51:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:52:30.336 1+0 records in 00:52:30.336 1+0 records out 00:52:30.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341834 s, 12.0 MB/s 00:52:30.336 05:51:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:52:30.336 05:51:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:52:30.336 05:51:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:52:30.595 05:51:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:52:30.595 05:51:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:52:30.595 05:51:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:52:30.595 05:51:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:52:30.595 05:51:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:52:30.854 /dev/nbd1 00:52:30.854 05:51:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:52:30.854 05:51:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:52:30.854 05:51:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:52:30.854 05:51:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:52:30.854 05:51:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:52:30.854 05:51:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:52:30.854 05:51:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:52:30.854 05:51:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:52:30.854 05:51:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:52:30.854 05:51:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:52:30.854 05:51:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:52:30.854 1+0 records in 00:52:30.854 1+0 records out 00:52:30.854 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233957 s, 17.5 MB/s 00:52:30.854 05:51:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:52:30.854 05:51:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:52:30.854 05:51:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:52:30.854 05:51:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:52:30.854 05:51:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:52:30.854 05:51:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:52:30.854 05:51:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:52:30.854 05:51:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:52:30.854 05:51:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
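The waitfornbd helper traced above is app_repeat's readiness check after each malloc bdev is exported over NBD: it polls /proc/partitions until the device name appears, then does one direct 4 KiB read to confirm the block device actually serves I/O. Stripped of the retry loop, the nbd0 check amounts to (paths exactly as in the trace):
    # device is listed, so read a single block straight from it
    grep -q -w nbd0 /proc/partitions
    dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
    stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest   # test only requires a non-zero size (4096 here)
    rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest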
00:52:30.854 05:51:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:52:31.113 05:51:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:52:31.113 { 00:52:31.113 "bdev_name": "Malloc0", 00:52:31.113 "nbd_device": "/dev/nbd0" 00:52:31.113 }, 00:52:31.113 { 00:52:31.113 "bdev_name": "Malloc1", 00:52:31.113 "nbd_device": "/dev/nbd1" 00:52:31.113 } 00:52:31.113 ]' 00:52:31.113 05:51:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:52:31.113 { 00:52:31.113 "bdev_name": "Malloc0", 00:52:31.113 "nbd_device": "/dev/nbd0" 00:52:31.113 }, 00:52:31.113 { 00:52:31.113 "bdev_name": "Malloc1", 00:52:31.113 "nbd_device": "/dev/nbd1" 00:52:31.113 } 00:52:31.113 ]' 00:52:31.113 05:51:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:52:31.113 05:51:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:52:31.113 /dev/nbd1' 00:52:31.113 05:51:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:52:31.113 /dev/nbd1' 00:52:31.113 05:51:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:52:31.113 05:51:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:52:31.113 05:51:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:52:31.113 05:51:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:52:31.113 05:51:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:52:31.113 05:51:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:52:31.113 05:51:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:52:31.113 05:51:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:52:31.113 05:51:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:52:31.113 05:51:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:52:31.113 05:51:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:52:31.113 05:51:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:52:31.113 256+0 records in 00:52:31.113 256+0 records out 00:52:31.113 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00958937 s, 109 MB/s 00:52:31.113 05:51:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:52:31.113 05:51:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:52:31.113 256+0 records in 00:52:31.113 256+0 records out 00:52:31.113 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238849 s, 43.9 MB/s 00:52:31.113 05:51:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:52:31.113 05:51:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:52:31.113 256+0 records in 00:52:31.113 256+0 records out 00:52:31.113 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235863 s, 44.5 MB/s 00:52:31.113 05:51:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:52:31.113 05:51:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:52:31.113 05:51:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:52:31.113 05:51:25 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:52:31.113 05:51:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:52:31.113 05:51:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:52:31.113 05:51:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:52:31.113 05:51:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:52:31.113 05:51:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:52:31.372 05:51:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:52:31.372 05:51:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:52:31.372 05:51:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:52:31.372 05:51:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:52:31.372 05:51:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:52:31.372 05:51:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:52:31.372 05:51:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:52:31.372 05:51:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:52:31.372 05:51:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:52:31.372 05:51:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:52:31.632 05:51:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:52:31.632 05:51:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:52:31.632 05:51:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:52:31.632 05:51:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:52:31.632 05:51:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:52:31.632 05:51:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:52:31.632 05:51:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:52:31.632 05:51:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:52:31.632 05:51:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:52:31.632 05:51:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:52:31.891 05:51:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:52:31.891 05:51:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:52:31.891 05:51:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:52:31.891 05:51:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:52:31.891 05:51:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:52:31.891 05:51:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:52:31.891 05:51:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:52:31.891 05:51:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:52:31.891 05:51:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:52:31.891 05:51:26 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:52:31.891 05:51:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:52:32.150 05:51:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:52:32.150 05:51:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:52:32.150 05:51:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:52:32.150 05:51:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:52:32.150 05:51:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:52:32.150 05:51:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:52:32.150 05:51:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:52:32.150 05:51:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:52:32.150 05:51:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:52:32.150 05:51:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:52:32.150 05:51:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:52:32.150 05:51:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:52:32.150 05:51:26 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:52:32.410 05:51:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:52:32.410 [2024-12-09 05:51:26.919830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:52:32.410 [2024-12-09 05:51:26.945901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:32.410 [2024-12-09 05:51:26.945911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:32.410 [2024-12-09 05:51:26.973410] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:52:32.410 [2024-12-09 05:51:26.973483] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:52:35.700 05:51:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:52:35.700 spdk_app_start Round 1 00:52:35.700 05:51:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:52:35.700 05:51:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60009 /var/tmp/spdk-nbd.sock 00:52:35.700 05:51:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60009 ']' 00:52:35.700 05:51:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:52:35.700 05:51:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:35.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:52:35.700 05:51:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
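The teardown the trace just walked through (nbd_common.sh@103-@105) has three parts: stop each exported device over the RPC socket, poll /proc/partitions until the kernel's nbd entry disappears, then re-query nbd_get_disks and confirm the device count is back to 0 before the app is killed. A minimal stand-alone sketch of that flow, assuming a target is still listening on /var/tmp/spdk-nbd.sock and using the rpc.py path from the trace (the 0.1 s retry interval is an illustrative choice, not taken from the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    for dev in /dev/nbd0 /dev/nbd1; do
        "$rpc" -s "$sock" nbd_stop_disk "$dev"
        name=$(basename "$dev")
        # wait (up to 20 attempts, as in waitfornbd_exit) for the kernel entry to vanish
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || break
            sleep 0.1
        done
    done
    # count what is still exported; an empty list must yield 0
    count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ] && echo 'all nbd devices released'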
00:52:35.700 05:51:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:35.700 05:51:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:52:35.700 05:51:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:35.700 05:51:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:52:35.700 05:51:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:52:35.958 Malloc0 00:52:35.958 05:51:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:52:36.217 Malloc1 00:52:36.217 05:51:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:52:36.217 05:51:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:52:36.217 05:51:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:52:36.217 05:51:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:52:36.217 05:51:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:52:36.217 05:51:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:52:36.217 05:51:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:52:36.217 05:51:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:52:36.217 05:51:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:52:36.217 05:51:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:52:36.217 05:51:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:52:36.217 05:51:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:52:36.217 05:51:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:52:36.217 05:51:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:52:36.217 05:51:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:52:36.217 05:51:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:52:36.475 /dev/nbd0 00:52:36.475 05:51:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:52:36.475 05:51:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:52:36.475 05:51:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:52:36.475 05:51:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:52:36.475 05:51:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:52:36.475 05:51:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:52:36.475 05:51:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:52:36.475 05:51:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:52:36.475 05:51:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:52:36.475 05:51:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:52:36.475 05:51:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:52:36.475 1+0 records in 00:52:36.475 1+0 records out 
00:52:36.475 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324265 s, 12.6 MB/s 00:52:36.475 05:51:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:52:36.475 05:51:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:52:36.475 05:51:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:52:36.475 05:51:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:52:36.475 05:51:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:52:36.475 05:51:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:52:36.475 05:51:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:52:36.476 05:51:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:52:36.734 /dev/nbd1 00:52:36.734 05:51:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:52:36.734 05:51:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:52:36.734 05:51:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:52:36.734 05:51:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:52:36.734 05:51:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:52:36.734 05:51:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:52:36.734 05:51:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:52:36.734 05:51:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:52:36.734 05:51:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:52:36.734 05:51:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:52:36.734 05:51:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:52:36.734 1+0 records in 00:52:36.734 1+0 records out 00:52:36.734 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245821 s, 16.7 MB/s 00:52:36.734 05:51:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:52:36.734 05:51:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:52:36.734 05:51:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:52:36.734 05:51:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:52:36.734 05:51:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:52:36.734 05:51:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:52:36.734 05:51:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:52:36.734 05:51:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:52:36.734 05:51:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:52:36.734 05:51:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:52:36.992 05:51:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:52:36.992 { 00:52:36.992 "bdev_name": "Malloc0", 00:52:36.992 "nbd_device": "/dev/nbd0" 00:52:36.992 }, 00:52:36.992 { 00:52:36.992 "bdev_name": "Malloc1", 00:52:36.992 "nbd_device": "/dev/nbd1" 00:52:36.992 } 
00:52:36.992 ]' 00:52:36.992 05:51:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:52:36.992 { 00:52:36.992 "bdev_name": "Malloc0", 00:52:36.992 "nbd_device": "/dev/nbd0" 00:52:36.992 }, 00:52:36.992 { 00:52:36.992 "bdev_name": "Malloc1", 00:52:36.992 "nbd_device": "/dev/nbd1" 00:52:36.992 } 00:52:36.992 ]' 00:52:36.992 05:51:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:52:36.992 05:51:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:52:36.992 /dev/nbd1' 00:52:36.992 05:51:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:52:36.992 /dev/nbd1' 00:52:36.992 05:51:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:52:36.992 05:51:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:52:36.992 05:51:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:52:36.992 05:51:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:52:36.992 05:51:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:52:36.992 05:51:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:52:36.992 05:51:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:52:36.992 05:51:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:52:36.992 05:51:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:52:36.992 05:51:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:52:36.992 05:51:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:52:36.992 05:51:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:52:36.992 256+0 records in 00:52:36.992 256+0 records out 00:52:36.992 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00803051 s, 131 MB/s 00:52:36.992 05:51:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:52:36.992 05:51:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:52:36.992 256+0 records in 00:52:36.992 256+0 records out 00:52:36.992 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219779 s, 47.7 MB/s 00:52:36.992 05:51:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:52:36.992 05:51:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:52:36.992 256+0 records in 00:52:36.992 256+0 records out 00:52:36.992 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255984 s, 41.0 MB/s 00:52:36.992 05:51:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:52:36.992 05:51:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:52:36.992 05:51:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:52:36.992 05:51:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:52:36.992 05:51:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:52:36.992 05:51:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:52:36.992 05:51:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:52:36.992 05:51:31 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:52:36.992 05:51:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:52:37.250 05:51:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:52:37.250 05:51:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:52:37.250 05:51:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:52:37.250 05:51:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:52:37.250 05:51:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:52:37.250 05:51:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:52:37.250 05:51:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:52:37.250 05:51:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:52:37.250 05:51:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:52:37.250 05:51:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:52:37.508 05:51:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:52:37.508 05:51:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:52:37.508 05:51:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:52:37.508 05:51:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:52:37.508 05:51:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:52:37.508 05:51:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:52:37.508 05:51:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:52:37.508 05:51:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:52:37.508 05:51:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:52:37.508 05:51:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:52:37.766 05:51:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:52:37.767 05:51:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:52:37.767 05:51:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:52:37.767 05:51:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:52:37.767 05:51:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:52:37.767 05:51:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:52:37.767 05:51:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:52:37.767 05:51:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:52:37.767 05:51:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:52:37.767 05:51:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:52:37.767 05:51:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:52:38.025 05:51:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:52:38.025 05:51:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:52:38.025 05:51:32 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:52:38.025 05:51:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:52:38.025 05:51:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:52:38.025 05:51:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:52:38.025 05:51:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:52:38.025 05:51:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:52:38.025 05:51:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:52:38.025 05:51:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:52:38.025 05:51:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:52:38.025 05:51:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:52:38.025 05:51:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:52:38.284 05:51:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:52:38.543 [2024-12-09 05:51:32.938619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:52:38.543 [2024-12-09 05:51:32.965067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:38.543 [2024-12-09 05:51:32.965078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:38.543 [2024-12-09 05:51:32.993182] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:52:38.543 [2024-12-09 05:51:32.993256] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:52:41.830 05:51:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:52:41.830 spdk_app_start Round 2 00:52:41.830 05:51:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:52:41.830 05:51:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60009 /var/tmp/spdk-nbd.sock 00:52:41.830 05:51:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60009 ']' 00:52:41.830 05:51:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:52:41.830 05:51:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:41.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:52:41.830 05:51:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
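The data check repeated in every round (nbd_common.sh@100/@101) is a plain dd/cmp round trip: 1 MiB of random data (256 blocks of 4096 bytes, matching the trace) is written through each exported /dev/nbdX with O_DIRECT, then compared byte-for-byte against the source file, which is removed afterwards. A condensed sketch of that write+verify pass; the scratch-file path here is illustrative, the trace uses test/event/nbdrandtest:

    tmp=$(mktemp)
    dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write phase, O_DIRECT
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$dev"                              # verify phase; exits non-zero on mismatch
    done
    rm "$tmp"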
00:52:41.830 05:51:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:41.830 05:51:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:52:41.830 05:51:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:41.830 05:51:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:52:41.830 05:51:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:52:41.830 Malloc0 00:52:41.830 05:51:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:52:42.089 Malloc1 00:52:42.089 05:51:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:52:42.089 05:51:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:52:42.089 05:51:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:52:42.089 05:51:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:52:42.089 05:51:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:52:42.089 05:51:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:52:42.089 05:51:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:52:42.089 05:51:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:52:42.089 05:51:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:52:42.089 05:51:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:52:42.089 05:51:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:52:42.089 05:51:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:52:42.089 05:51:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:52:42.089 05:51:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:52:42.089 05:51:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:52:42.089 05:51:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:52:42.348 /dev/nbd0 00:52:42.348 05:51:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:52:42.348 05:51:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:52:42.348 05:51:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:52:42.348 05:51:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:52:42.348 05:51:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:52:42.348 05:51:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:52:42.348 05:51:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:52:42.348 05:51:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:52:42.348 05:51:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:52:42.348 05:51:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:52:42.348 05:51:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:52:42.348 1+0 records in 00:52:42.348 1+0 records out 
00:52:42.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284932 s, 14.4 MB/s 00:52:42.348 05:51:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:52:42.607 05:51:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:52:42.607 05:51:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:52:42.607 05:51:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:52:42.607 05:51:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:52:42.607 05:51:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:52:42.607 05:51:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:52:42.607 05:51:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:52:42.866 /dev/nbd1 00:52:42.866 05:51:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:52:42.866 05:51:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:52:42.866 05:51:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:52:42.866 05:51:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:52:42.866 05:51:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:52:42.866 05:51:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:52:42.866 05:51:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:52:42.866 05:51:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:52:42.866 05:51:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:52:42.866 05:51:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:52:42.866 05:51:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:52:42.866 1+0 records in 00:52:42.866 1+0 records out 00:52:42.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373197 s, 11.0 MB/s 00:52:42.866 05:51:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:52:42.866 05:51:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:52:42.866 05:51:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:52:42.866 05:51:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:52:42.866 05:51:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:52:42.866 05:51:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:52:42.866 05:51:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:52:42.866 05:51:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:52:42.866 05:51:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:52:42.866 05:51:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:52:43.125 { 00:52:43.125 "bdev_name": "Malloc0", 00:52:43.125 "nbd_device": "/dev/nbd0" 00:52:43.125 }, 00:52:43.125 { 00:52:43.125 "bdev_name": "Malloc1", 00:52:43.125 "nbd_device": "/dev/nbd1" 00:52:43.125 } 
00:52:43.125 ]' 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:52:43.125 { 00:52:43.125 "bdev_name": "Malloc0", 00:52:43.125 "nbd_device": "/dev/nbd0" 00:52:43.125 }, 00:52:43.125 { 00:52:43.125 "bdev_name": "Malloc1", 00:52:43.125 "nbd_device": "/dev/nbd1" 00:52:43.125 } 00:52:43.125 ]' 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:52:43.125 /dev/nbd1' 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:52:43.125 /dev/nbd1' 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:52:43.125 256+0 records in 00:52:43.125 256+0 records out 00:52:43.125 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00678865 s, 154 MB/s 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:52:43.125 256+0 records in 00:52:43.125 256+0 records out 00:52:43.125 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222653 s, 47.1 MB/s 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:52:43.125 256+0 records in 00:52:43.125 256+0 records out 00:52:43.125 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282965 s, 37.1 MB/s 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:52:43.125 05:51:37 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:52:43.125 05:51:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:52:43.126 05:51:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:52:43.126 05:51:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:52:43.385 05:51:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:52:43.385 05:51:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:52:43.385 05:51:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:52:43.385 05:51:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:52:43.385 05:51:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:52:43.385 05:51:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:52:43.385 05:51:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:52:43.385 05:51:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:52:43.385 05:51:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:52:43.385 05:51:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:52:43.385 05:51:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:52:43.644 05:51:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:52:43.644 05:51:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:52:43.644 05:51:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:52:43.644 05:51:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:52:43.644 05:51:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:52:43.644 05:51:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:52:43.644 05:51:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:52:43.644 05:51:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:52:43.644 05:51:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:52:43.644 05:51:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:52:43.644 05:51:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:52:43.902 05:51:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:52:43.902 05:51:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:52:43.902 05:51:38 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:52:43.902 05:51:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:52:43.902 05:51:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:52:43.902 05:51:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:52:43.902 05:51:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:52:43.902 05:51:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:52:43.902 05:51:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:52:43.902 05:51:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:52:43.902 05:51:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:52:43.902 05:51:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:52:43.902 05:51:38 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:52:44.161 05:51:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:52:44.419 [2024-12-09 05:51:38.798497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:52:44.419 [2024-12-09 05:51:38.829240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:44.419 [2024-12-09 05:51:38.829250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:44.419 [2024-12-09 05:51:38.858628] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:52:44.419 [2024-12-09 05:51:38.858896] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:52:47.788 05:51:41 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60009 /var/tmp/spdk-nbd.sock 00:52:47.788 05:51:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60009 ']' 00:52:47.788 05:51:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:52:47.788 05:51:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:47.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:52:47.788 05:51:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
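Taken as a whole, one app_repeat round is a short RPC conversation with the app listening on /var/tmp/spdk-nbd.sock: create two malloc bdevs (the trace passes 64 and 4096 to bdev_malloc_create), export them as /dev/nbd0 and /dev/nbd1, run the dd/cmp pass shown above, tear the exports down, and finally ask the app to exit so event.sh can sleep 3 seconds and start the next round. A compressed sketch of that conversation, with the verify step elided:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    m0=$("$rpc" -s "$sock" bdev_malloc_create 64 4096)   # prints the new bdev name, e.g. Malloc0
    m1=$("$rpc" -s "$sock" bdev_malloc_create 64 4096)   # e.g. Malloc1
    "$rpc" -s "$sock" nbd_start_disk "$m0" /dev/nbd0
    "$rpc" -s "$sock" nbd_start_disk "$m1" /dev/nbd1
    "$rpc" -s "$sock" nbd_get_disks                      # JSON list of {bdev_name, nbd_device} pairs
    # ... dd/cmp write+verify pass ...
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1
    "$rpc" -s "$sock" spdk_kill_instance SIGTERM         # ends the round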
00:52:47.788 05:51:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:47.788 05:51:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:52:47.788 05:51:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:47.788 05:51:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:52:47.788 05:51:42 event.app_repeat -- event/event.sh@39 -- # killprocess 60009 00:52:47.788 05:51:42 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 60009 ']' 00:52:47.788 05:51:42 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 60009 00:52:47.788 05:51:42 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:52:47.788 05:51:42 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:47.788 05:51:42 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60009 00:52:47.788 05:51:42 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:52:47.788 05:51:42 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:52:47.788 05:51:42 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60009' 00:52:47.788 killing process with pid 60009 00:52:47.788 05:51:42 event.app_repeat -- common/autotest_common.sh@973 -- # kill 60009 00:52:47.788 05:51:42 event.app_repeat -- common/autotest_common.sh@978 -- # wait 60009 00:52:47.788 spdk_app_start is called in Round 0. 00:52:47.788 Shutdown signal received, stop current app iteration 00:52:47.788 Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 reinitialization... 00:52:47.788 spdk_app_start is called in Round 1. 00:52:47.788 Shutdown signal received, stop current app iteration 00:52:47.788 Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 reinitialization... 00:52:47.788 spdk_app_start is called in Round 2. 00:52:47.788 Shutdown signal received, stop current app iteration 00:52:47.788 Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 reinitialization... 00:52:47.788 spdk_app_start is called in Round 3. 00:52:47.788 Shutdown signal received, stop current app iteration 00:52:47.788 05:51:42 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:52:47.788 05:51:42 event.app_repeat -- event/event.sh@42 -- # return 0 00:52:47.788 00:52:47.788 real 0m18.303s 00:52:47.788 user 0m41.933s 00:52:47.788 sys 0m2.596s 00:52:47.788 05:51:42 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:47.788 05:51:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:52:47.788 ************************************ 00:52:47.788 END TEST app_repeat 00:52:47.788 ************************************ 00:52:47.788 05:51:42 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:52:47.788 05:51:42 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:52:47.788 05:51:42 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:52:47.788 05:51:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:47.788 05:51:42 event -- common/autotest_common.sh@10 -- # set +x 00:52:47.788 ************************************ 00:52:47.788 START TEST cpu_locks 00:52:47.788 ************************************ 00:52:47.788 05:51:42 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:52:47.788 * Looking for test storage... 
00:52:47.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:52:47.788 05:51:42 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:52:47.788 05:51:42 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:52:47.788 05:51:42 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:52:47.788 05:51:42 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:52:47.788 05:51:42 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:52:47.788 05:51:42 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:52:47.788 05:51:42 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:52:47.788 05:51:42 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:52:47.788 05:51:42 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:52:47.788 05:51:42 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:52:47.788 05:51:42 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:52:47.788 05:51:42 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:52:47.788 05:51:42 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:52:47.788 05:51:42 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:52:47.788 05:51:42 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:52:47.788 05:51:42 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:52:47.788 05:51:42 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:52:47.788 05:51:42 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:52:47.788 05:51:42 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:52:47.788 05:51:42 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:52:48.099 05:51:42 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:52:48.099 05:51:42 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:52:48.099 05:51:42 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:52:48.099 05:51:42 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:52:48.099 05:51:42 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:52:48.099 05:51:42 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:52:48.099 05:51:42 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:52:48.099 05:51:42 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:52:48.099 05:51:42 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:52:48.099 05:51:42 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:52:48.099 05:51:42 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:52:48.099 05:51:42 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:52:48.099 05:51:42 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:52:48.099 05:51:42 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:52:48.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:48.099 --rc genhtml_branch_coverage=1 00:52:48.099 --rc genhtml_function_coverage=1 00:52:48.099 --rc genhtml_legend=1 00:52:48.099 --rc geninfo_all_blocks=1 00:52:48.099 --rc geninfo_unexecuted_blocks=1 00:52:48.099 00:52:48.099 ' 00:52:48.099 05:51:42 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:52:48.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:48.099 --rc genhtml_branch_coverage=1 00:52:48.099 --rc genhtml_function_coverage=1 
00:52:48.099 --rc genhtml_legend=1 00:52:48.099 --rc geninfo_all_blocks=1 00:52:48.099 --rc geninfo_unexecuted_blocks=1 00:52:48.099 00:52:48.099 ' 00:52:48.099 05:51:42 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:52:48.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:48.099 --rc genhtml_branch_coverage=1 00:52:48.099 --rc genhtml_function_coverage=1 00:52:48.099 --rc genhtml_legend=1 00:52:48.099 --rc geninfo_all_blocks=1 00:52:48.099 --rc geninfo_unexecuted_blocks=1 00:52:48.099 00:52:48.099 ' 00:52:48.099 05:51:42 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:52:48.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:52:48.099 --rc genhtml_branch_coverage=1 00:52:48.099 --rc genhtml_function_coverage=1 00:52:48.099 --rc genhtml_legend=1 00:52:48.099 --rc geninfo_all_blocks=1 00:52:48.099 --rc geninfo_unexecuted_blocks=1 00:52:48.099 00:52:48.099 ' 00:52:48.099 05:51:42 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:52:48.099 05:51:42 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:52:48.099 05:51:42 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:52:48.100 05:51:42 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:52:48.100 05:51:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:52:48.100 05:51:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:48.100 05:51:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:52:48.100 ************************************ 00:52:48.100 START TEST default_locks 00:52:48.100 ************************************ 00:52:48.100 05:51:42 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:52:48.100 05:51:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60629 00:52:48.100 05:51:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60629 00:52:48.100 05:51:42 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60629 ']' 00:52:48.100 05:51:42 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:48.100 05:51:42 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:48.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:48.100 05:51:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:52:48.100 05:51:42 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:48.100 05:51:42 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:48.100 05:51:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:52:48.100 [2024-12-09 05:51:42.467443] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:52:48.100 [2024-12-09 05:51:42.468305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60629 ] 00:52:48.100 [2024-12-09 05:51:42.612906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:48.100 [2024-12-09 05:51:42.640392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:48.358 05:51:42 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:48.358 05:51:42 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:52:48.358 05:51:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60629 00:52:48.358 05:51:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:52:48.358 05:51:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60629 00:52:48.924 05:51:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60629 00:52:48.924 05:51:43 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60629 ']' 00:52:48.924 05:51:43 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60629 00:52:48.924 05:51:43 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:52:48.924 05:51:43 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:48.924 05:51:43 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60629 00:52:48.924 05:51:43 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:52:48.924 05:51:43 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:52:48.924 killing process with pid 60629 00:52:48.924 05:51:43 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60629' 00:52:48.924 05:51:43 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60629 00:52:48.924 05:51:43 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60629 00:52:49.182 05:51:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60629 00:52:49.182 05:51:43 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:52:49.182 05:51:43 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60629 00:52:49.182 05:51:43 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:52:49.182 05:51:43 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:49.182 05:51:43 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:52:49.182 05:51:43 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:49.182 05:51:43 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60629 00:52:49.182 05:51:43 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60629 ']' 00:52:49.182 05:51:43 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:49.182 05:51:43 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:49.182 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:49.182 05:51:43 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:49.182 05:51:43 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:49.182 05:51:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:52:49.182 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60629) - No such process 00:52:49.182 ERROR: process (pid: 60629) is no longer running 00:52:49.182 05:51:43 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:49.182 05:51:43 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:52:49.182 05:51:43 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:52:49.182 05:51:43 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:52:49.182 05:51:43 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:52:49.182 05:51:43 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:52:49.182 05:51:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:52:49.182 05:51:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:52:49.182 05:51:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:52:49.182 05:51:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:52:49.182 00:52:49.182 real 0m1.136s 00:52:49.182 user 0m1.172s 00:52:49.182 sys 0m0.442s 00:52:49.182 05:51:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:49.182 05:51:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:52:49.182 ************************************ 00:52:49.182 END TEST default_locks 00:52:49.182 ************************************ 00:52:49.182 05:51:43 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:52:49.182 05:51:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:52:49.182 05:51:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:49.182 05:51:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:52:49.182 ************************************ 00:52:49.182 START TEST default_locks_via_rpc 00:52:49.182 ************************************ 00:52:49.182 05:51:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:52:49.182 05:51:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60674 00:52:49.182 05:51:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60674 00:52:49.182 05:51:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:52:49.182 05:51:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60674 ']' 00:52:49.182 05:51:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:49.182 05:51:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:49.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
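The default_locks case above reduces to: start a single-core spdk_tgt, confirm it is holding a CPU-core file lock, kill it, and check that a second waitforlisten on the dead pid fails with 'No such process' instead of hanging. The lock check itself is just lslocks output filtered for spdk_cpu_lock entries; a stand-alone sketch using the pid from this run (60629) purely for illustration:

    pid=60629                                   # pid reported by spdk_tgt -m 0x1 in the trace
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "pid $pid holds its CPU core lock"
    fi
    kill "$pid"                                 # same SIGTERM path as killprocess
    sleep 1                                     # give the reactor time to exit (interval is illustrative)
    if ! kill -0 "$pid" 2> /dev/null; then
        echo "pid $pid is gone and the lock file is released"
    fi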
00:52:49.182 05:51:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:49.182 05:51:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:49.182 05:51:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:52:49.182 [2024-12-09 05:51:43.653969] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:52:49.182 [2024-12-09 05:51:43.654103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60674 ] 00:52:49.440 [2024-12-09 05:51:43.792252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:49.440 [2024-12-09 05:51:43.823059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:50.374 05:51:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:50.374 05:51:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:52:50.374 05:51:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:52:50.374 05:51:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:50.374 05:51:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:52:50.374 05:51:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:50.374 05:51:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:52:50.374 05:51:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:52:50.374 05:51:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:52:50.374 05:51:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:52:50.374 05:51:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:52:50.374 05:51:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:52:50.374 05:51:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:52:50.374 05:51:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:52:50.374 05:51:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60674 00:52:50.374 05:51:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60674 00:52:50.374 05:51:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:52:50.642 05:51:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60674 00:52:50.642 05:51:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60674 ']' 00:52:50.642 05:51:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60674 00:52:50.642 05:51:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:52:50.642 05:51:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:50.642 05:51:45 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60674 00:52:50.642 05:51:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:52:50.642 05:51:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:52:50.642 killing process with pid 60674 00:52:50.642 05:51:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60674' 00:52:50.642 05:51:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60674 00:52:50.642 05:51:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60674 00:52:50.902 00:52:50.902 real 0m1.748s 00:52:50.902 user 0m2.028s 00:52:50.902 sys 0m0.449s 00:52:50.902 05:51:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:50.902 ************************************ 00:52:50.902 END TEST default_locks_via_rpc 00:52:50.902 ************************************ 00:52:50.902 05:51:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:52:50.902 05:51:45 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:52:50.902 05:51:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:52:50.902 05:51:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:50.902 05:51:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:52:50.902 ************************************ 00:52:50.902 START TEST non_locking_app_on_locked_coremask 00:52:50.902 ************************************ 00:52:50.902 05:51:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:52:50.902 05:51:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60743 00:52:50.902 05:51:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60743 /var/tmp/spdk.sock 00:52:50.902 05:51:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:52:50.902 05:51:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60743 ']' 00:52:50.902 05:51:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:50.902 05:51:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:50.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:50.902 05:51:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:50.902 05:51:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:50.902 05:51:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:52:50.902 [2024-12-09 05:51:45.459527] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:52:50.902 [2024-12-09 05:51:45.459627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60743 ] 00:52:51.161 [2024-12-09 05:51:45.602966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:51.161 [2024-12-09 05:51:45.631533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:51.418 05:51:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:51.418 05:51:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:52:51.418 05:51:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:52:51.418 05:51:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60752 00:52:51.418 05:51:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60752 /var/tmp/spdk2.sock 00:52:51.418 05:51:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60752 ']' 00:52:51.418 05:51:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:52:51.418 05:51:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:51.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:52:51.418 05:51:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:52:51.418 05:51:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:51.418 05:51:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:52:51.418 [2024-12-09 05:51:45.843683] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:52:51.418 [2024-12-09 05:51:45.843793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60752 ] 00:52:51.418 [2024-12-09 05:51:45.997583] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:52:51.418 [2024-12-09 05:51:45.997631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:51.677 [2024-12-09 05:51:46.058429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:52.612 05:51:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:52.612 05:51:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:52:52.612 05:51:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60743 00:52:52.612 05:51:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60743 00:52:52.612 05:51:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:52:53.180 05:51:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60743 00:52:53.180 05:51:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60743 ']' 00:52:53.180 05:51:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60743 00:52:53.180 05:51:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:52:53.180 05:51:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:53.180 05:51:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60743 00:52:53.180 05:51:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:52:53.180 05:51:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:52:53.180 05:51:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60743' 00:52:53.180 killing process with pid 60743 00:52:53.180 05:51:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60743 00:52:53.180 05:51:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60743 00:52:53.749 05:51:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60752 00:52:53.749 05:51:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60752 ']' 00:52:53.749 05:51:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60752 00:52:53.749 05:51:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:52:53.749 05:51:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:53.749 05:51:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60752 00:52:53.749 05:51:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:52:53.749 05:51:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:52:53.749 killing process with pid 60752 00:52:53.749 05:51:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60752' 00:52:53.749 05:51:48 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60752 00:52:53.749 05:51:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60752 00:52:54.008 00:52:54.008 real 0m2.987s 00:52:54.008 user 0m3.523s 00:52:54.008 sys 0m0.890s 00:52:54.008 05:51:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:54.008 05:51:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:52:54.008 ************************************ 00:52:54.008 END TEST non_locking_app_on_locked_coremask 00:52:54.008 ************************************ 00:52:54.008 05:51:48 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:52:54.008 05:51:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:52:54.008 05:51:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:54.008 05:51:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:52:54.008 ************************************ 00:52:54.008 START TEST locking_app_on_unlocked_coremask 00:52:54.008 ************************************ 00:52:54.009 05:51:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:52:54.009 05:51:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60831 00:52:54.009 05:51:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60831 /var/tmp/spdk.sock 00:52:54.009 05:51:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:52:54.009 05:51:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60831 ']' 00:52:54.009 05:51:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:54.009 05:51:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:54.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:54.009 05:51:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:54.009 05:51:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:54.009 05:51:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:52:54.009 [2024-12-09 05:51:48.494581] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:52:54.009 [2024-12-09 05:51:48.494695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60831 ] 00:52:54.267 [2024-12-09 05:51:48.638140] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:52:54.267 [2024-12-09 05:51:48.638192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:54.267 [2024-12-09 05:51:48.666532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:54.267 05:51:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:54.267 05:51:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:52:54.267 05:51:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60840 00:52:54.267 05:51:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:52:54.267 05:51:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60840 /var/tmp/spdk2.sock 00:52:54.267 05:51:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60840 ']' 00:52:54.267 05:51:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:52:54.267 05:51:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:54.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:52:54.267 05:51:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:52:54.267 05:51:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:54.267 05:51:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:52:54.526 [2024-12-09 05:51:48.891172] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:52:54.526 [2024-12-09 05:51:48.891301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60840 ] 00:52:54.526 [2024-12-09 05:51:49.044348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:54.784 [2024-12-09 05:51:49.109832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:55.351 05:51:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:55.352 05:51:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:52:55.352 05:51:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60840 00:52:55.352 05:51:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60840 00:52:55.352 05:51:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:52:56.289 05:51:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60831 00:52:56.289 05:51:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60831 ']' 00:52:56.289 05:51:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60831 00:52:56.289 05:51:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:52:56.289 05:51:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:56.289 05:51:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60831 00:52:56.289 05:51:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:52:56.289 05:51:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:52:56.289 killing process with pid 60831 00:52:56.289 05:51:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60831' 00:52:56.289 05:51:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60831 00:52:56.289 05:51:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60831 00:52:56.856 05:51:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60840 00:52:56.856 05:51:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60840 ']' 00:52:56.856 05:51:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60840 00:52:56.856 05:51:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:52:56.856 05:51:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:56.856 05:51:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60840 00:52:56.856 killing process with pid 60840 00:52:56.856 05:51:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:52:56.856 05:51:51 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:52:56.856 05:51:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60840' 00:52:56.856 05:51:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60840 00:52:56.856 05:51:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60840 00:52:56.856 ************************************ 00:52:56.856 END TEST locking_app_on_unlocked_coremask 00:52:56.856 ************************************ 00:52:56.856 00:52:56.856 real 0m2.999s 00:52:56.856 user 0m3.555s 00:52:56.856 sys 0m0.888s 00:52:56.856 05:51:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:56.856 05:51:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:52:57.114 05:51:51 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:52:57.114 05:51:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:52:57.114 05:51:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:57.114 05:51:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:52:57.114 ************************************ 00:52:57.114 START TEST locking_app_on_locked_coremask 00:52:57.114 ************************************ 00:52:57.114 05:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:52:57.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:57.114 05:51:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60919 00:52:57.114 05:51:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60919 /var/tmp/spdk.sock 00:52:57.114 05:51:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:52:57.114 05:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60919 ']' 00:52:57.114 05:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:57.114 05:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:57.114 05:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:57.114 05:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:57.114 05:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:52:57.114 [2024-12-09 05:51:51.547539] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:52:57.114 [2024-12-09 05:51:51.547835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60919 ] 00:52:57.114 [2024-12-09 05:51:51.693304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:57.372 [2024-12-09 05:51:51.722845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:57.372 05:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:57.372 05:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:52:57.372 05:51:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:52:57.372 05:51:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60928 00:52:57.372 05:51:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60928 /var/tmp/spdk2.sock 00:52:57.372 05:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:52:57.372 05:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60928 /var/tmp/spdk2.sock 00:52:57.372 05:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:52:57.372 05:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:57.372 05:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:52:57.372 05:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:57.372 05:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60928 /var/tmp/spdk2.sock 00:52:57.372 05:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60928 ']' 00:52:57.372 05:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:52:57.372 05:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:57.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:52:57.372 05:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:52:57.372 05:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:57.372 05:51:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:52:57.373 [2024-12-09 05:51:51.947291] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:52:57.373 [2024-12-09 05:51:51.947397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60928 ] 00:52:57.630 [2024-12-09 05:51:52.101992] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60919 has claimed it. 00:52:57.630 [2024-12-09 05:51:52.102077] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:52:58.196 ERROR: process (pid: 60928) is no longer running 00:52:58.196 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60928) - No such process 00:52:58.196 05:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:58.196 05:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:52:58.196 05:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:52:58.196 05:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:52:58.196 05:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:52:58.196 05:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:52:58.196 05:51:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60919 00:52:58.196 05:51:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60919 00:52:58.196 05:51:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:52:58.456 05:51:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60919 00:52:58.456 05:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60919 ']' 00:52:58.456 05:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60919 00:52:58.456 05:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:52:58.456 05:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:52:58.456 05:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60919 00:52:58.456 killing process with pid 60919 00:52:58.456 05:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:52:58.456 05:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:52:58.456 05:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60919' 00:52:58.456 05:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60919 00:52:58.456 05:51:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60919 00:52:58.714 00:52:58.714 real 0m1.694s 00:52:58.714 user 0m2.020s 00:52:58.714 sys 0m0.437s 00:52:58.714 05:51:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:52:58.714 ************************************ 00:52:58.714 END 
TEST locking_app_on_locked_coremask 00:52:58.714 ************************************ 00:52:58.714 05:51:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:52:58.714 05:51:53 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:52:58.714 05:51:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:52:58.714 05:51:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:52:58.714 05:51:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:52:58.714 ************************************ 00:52:58.714 START TEST locking_overlapped_coremask 00:52:58.715 ************************************ 00:52:58.715 05:51:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:52:58.715 05:51:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60980 00:52:58.715 05:51:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:52:58.715 05:51:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60980 /var/tmp/spdk.sock 00:52:58.715 05:51:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60980 ']' 00:52:58.715 05:51:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:58.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:58.715 05:51:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:58.715 05:51:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:58.715 05:51:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:58.715 05:51:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:52:58.715 [2024-12-09 05:51:53.287943] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:52:58.715 [2024-12-09 05:51:53.288224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60980 ] 00:52:58.972 [2024-12-09 05:51:53.431601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:52:58.972 [2024-12-09 05:51:53.465331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:52:58.972 [2024-12-09 05:51:53.465478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:52:58.972 [2024-12-09 05:51:53.465700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:52:59.230 05:51:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:52:59.230 05:51:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:52:59.230 05:51:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60996 00:52:59.230 05:51:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:52:59.230 05:51:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60996 /var/tmp/spdk2.sock 00:52:59.230 05:51:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:52:59.230 05:51:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60996 /var/tmp/spdk2.sock 00:52:59.230 05:51:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:52:59.230 05:51:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:59.230 05:51:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:52:59.230 05:51:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:52:59.230 05:51:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60996 /var/tmp/spdk2.sock 00:52:59.230 05:51:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60996 ']' 00:52:59.230 05:51:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:52:59.230 05:51:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:52:59.230 05:51:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:52:59.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:52:59.230 05:51:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:52:59.230 05:51:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:52:59.230 [2024-12-09 05:51:53.706196] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:52:59.230 [2024-12-09 05:51:53.706291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60996 ] 00:52:59.491 [2024-12-09 05:51:53.867168] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60980 has claimed it. 00:52:59.491 [2024-12-09 05:51:53.867212] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:53:00.056 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60996) - No such process 00:53:00.056 ERROR: process (pid: 60996) is no longer running 00:53:00.056 05:51:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:00.057 05:51:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:53:00.057 05:51:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:53:00.057 05:51:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:53:00.057 05:51:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:53:00.057 05:51:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:53:00.057 05:51:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:53:00.057 05:51:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:53:00.057 05:51:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:53:00.057 05:51:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:53:00.057 05:51:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60980 00:53:00.057 05:51:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60980 ']' 00:53:00.057 05:51:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60980 00:53:00.057 05:51:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:53:00.057 05:51:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:00.057 05:51:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60980 00:53:00.057 05:51:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:53:00.057 05:51:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:53:00.057 killing process with pid 60980 00:53:00.057 05:51:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60980' 00:53:00.057 05:51:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60980 00:53:00.057 05:51:54 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60980 00:53:00.315 00:53:00.315 real 0m1.468s 00:53:00.315 user 0m4.039s 00:53:00.315 sys 0m0.307s 00:53:00.315 05:51:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:00.315 05:51:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:53:00.315 ************************************ 00:53:00.315 END TEST locking_overlapped_coremask 00:53:00.315 ************************************ 00:53:00.315 05:51:54 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:53:00.315 05:51:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:53:00.315 05:51:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:00.315 05:51:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:53:00.315 ************************************ 00:53:00.315 START TEST locking_overlapped_coremask_via_rpc 00:53:00.315 ************************************ 00:53:00.315 05:51:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:53:00.315 05:51:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61042 00:53:00.315 05:51:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61042 /var/tmp/spdk.sock 00:53:00.315 05:51:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:53:00.315 05:51:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61042 ']' 00:53:00.315 05:51:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:00.315 05:51:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:00.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:53:00.315 05:51:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:53:00.315 05:51:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:00.315 05:51:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:00.315 [2024-12-09 05:51:54.804814] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:53:00.315 [2024-12-09 05:51:54.804932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61042 ] 00:53:00.574 [2024-12-09 05:51:54.943916] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:53:00.574 [2024-12-09 05:51:54.944095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:53:00.574 [2024-12-09 05:51:54.977985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:00.574 [2024-12-09 05:51:54.977859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:00.574 [2024-12-09 05:51:54.977976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:53:01.509 05:51:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:01.509 05:51:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:53:01.509 05:51:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:53:01.509 05:51:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61072 00:53:01.509 05:51:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61072 /var/tmp/spdk2.sock 00:53:01.509 05:51:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61072 ']' 00:53:01.509 05:51:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:53:01.509 05:51:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:01.509 05:51:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:53:01.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:53:01.509 05:51:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:01.509 05:51:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:01.509 [2024-12-09 05:51:55.799085] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:53:01.509 [2024-12-09 05:51:55.799567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61072 ] 00:53:01.509 [2024-12-09 05:51:55.948938] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:53:01.509 [2024-12-09 05:51:55.948988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:53:01.509 [2024-12-09 05:51:56.014383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:53:01.509 [2024-12-09 05:51:56.017790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:53:01.509 [2024-12-09 05:51:56.017791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:53:01.768 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:01.768 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:53:01.768 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:53:01.768 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:01.768 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:01.768 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:01.768 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:53:01.768 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:53:01.768 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:53:01.768 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:53:01.768 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:01.768 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:53:01.768 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:01.768 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:53:01.768 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:01.768 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:01.768 [2024-12-09 05:51:56.333830] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61042 has claimed it. 
00:53:01.768 2024/12/09 05:51:56 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:53:01.768 request: 00:53:01.768 { 00:53:01.768 "method": "framework_enable_cpumask_locks", 00:53:01.768 "params": {} 00:53:01.768 } 00:53:01.768 Got JSON-RPC error response 00:53:01.768 GoRPCClient: error on JSON-RPC call 00:53:01.768 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:53:01.768 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:53:01.768 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:53:01.768 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:53:01.768 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:53:01.768 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61042 /var/tmp/spdk.sock 00:53:01.768 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61042 ']' 00:53:01.768 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:01.768 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:01.768 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:53:01.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:53:01.768 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:01.768 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:02.026 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:02.026 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:53:02.026 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61072 /var/tmp/spdk2.sock 00:53:02.026 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61072 ']' 00:53:02.026 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:53:02.026 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:02.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:53:02.026 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:53:02.026 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:02.026 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:02.594 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:02.594 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:53:02.594 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:53:02.594 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:53:02.594 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:53:02.594 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:53:02.594 00:53:02.594 real 0m2.160s 00:53:02.594 user 0m1.166s 00:53:02.594 sys 0m0.181s 00:53:02.594 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:02.594 05:51:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:53:02.594 ************************************ 00:53:02.594 END TEST locking_overlapped_coremask_via_rpc 00:53:02.594 ************************************ 00:53:02.594 05:51:56 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:53:02.594 05:51:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61042 ]] 00:53:02.594 05:51:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61042 00:53:02.594 05:51:56 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61042 ']' 00:53:02.594 05:51:56 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61042 00:53:02.594 05:51:56 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:53:02.594 05:51:56 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:02.594 05:51:56 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61042 00:53:02.594 05:51:56 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:53:02.594 05:51:56 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:53:02.594 05:51:56 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61042' 00:53:02.594 killing process with pid 61042 00:53:02.594 05:51:56 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61042 00:53:02.594 05:51:56 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61042 00:53:02.853 05:51:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61072 ]] 00:53:02.853 05:51:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61072 00:53:02.853 05:51:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61072 ']' 00:53:02.853 05:51:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61072 00:53:02.853 05:51:57 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:53:02.853 05:51:57 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:02.853 
05:51:57 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61072 00:53:02.853 05:51:57 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:53:02.853 05:51:57 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:53:02.853 killing process with pid 61072 00:53:02.853 05:51:57 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61072' 00:53:02.854 05:51:57 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61072 00:53:02.854 05:51:57 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61072 00:53:03.112 05:51:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:53:03.112 05:51:57 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:53:03.112 05:51:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61042 ]] 00:53:03.112 05:51:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61042 00:53:03.112 05:51:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61042 ']' 00:53:03.112 05:51:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61042 00:53:03.112 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61042) - No such process 00:53:03.112 Process with pid 61042 is not found 00:53:03.112 05:51:57 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61042 is not found' 00:53:03.112 05:51:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61072 ]] 00:53:03.112 05:51:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61072 00:53:03.112 05:51:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61072 ']' 00:53:03.112 05:51:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61072 00:53:03.112 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61072) - No such process 00:53:03.112 Process with pid 61072 is not found 00:53:03.113 05:51:57 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61072 is not found' 00:53:03.113 05:51:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:53:03.113 00:53:03.113 real 0m15.270s 00:53:03.113 user 0m27.219s 00:53:03.113 sys 0m4.237s 00:53:03.113 05:51:57 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:03.113 ************************************ 00:53:03.113 END TEST cpu_locks 00:53:03.113 ************************************ 00:53:03.113 05:51:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:53:03.113 00:53:03.113 real 0m41.614s 00:53:03.113 user 1m21.400s 00:53:03.113 sys 0m7.536s 00:53:03.113 05:51:57 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:03.113 05:51:57 event -- common/autotest_common.sh@10 -- # set +x 00:53:03.113 ************************************ 00:53:03.113 END TEST event 00:53:03.113 ************************************ 00:53:03.113 05:51:57 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:53:03.113 05:51:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:53:03.113 05:51:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:03.113 05:51:57 -- common/autotest_common.sh@10 -- # set +x 00:53:03.113 ************************************ 00:53:03.113 START TEST thread 00:53:03.113 ************************************ 00:53:03.113 05:51:57 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:53:03.113 * Looking for test storage... 
00:53:03.113 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:53:03.113 05:51:57 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:53:03.113 05:51:57 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:53:03.113 05:51:57 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:53:03.113 05:51:57 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:53:03.113 05:51:57 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:53:03.113 05:51:57 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:53:03.113 05:51:57 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:53:03.113 05:51:57 thread -- scripts/common.sh@336 -- # IFS=.-: 00:53:03.113 05:51:57 thread -- scripts/common.sh@336 -- # read -ra ver1 00:53:03.113 05:51:57 thread -- scripts/common.sh@337 -- # IFS=.-: 00:53:03.113 05:51:57 thread -- scripts/common.sh@337 -- # read -ra ver2 00:53:03.113 05:51:57 thread -- scripts/common.sh@338 -- # local 'op=<' 00:53:03.113 05:51:57 thread -- scripts/common.sh@340 -- # ver1_l=2 00:53:03.113 05:51:57 thread -- scripts/common.sh@341 -- # ver2_l=1 00:53:03.113 05:51:57 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:53:03.113 05:51:57 thread -- scripts/common.sh@344 -- # case "$op" in 00:53:03.113 05:51:57 thread -- scripts/common.sh@345 -- # : 1 00:53:03.113 05:51:57 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:53:03.113 05:51:57 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:53:03.113 05:51:57 thread -- scripts/common.sh@365 -- # decimal 1 00:53:03.371 05:51:57 thread -- scripts/common.sh@353 -- # local d=1 00:53:03.371 05:51:57 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:53:03.371 05:51:57 thread -- scripts/common.sh@355 -- # echo 1 00:53:03.371 05:51:57 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:53:03.371 05:51:57 thread -- scripts/common.sh@366 -- # decimal 2 00:53:03.371 05:51:57 thread -- scripts/common.sh@353 -- # local d=2 00:53:03.371 05:51:57 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:53:03.371 05:51:57 thread -- scripts/common.sh@355 -- # echo 2 00:53:03.371 05:51:57 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:53:03.371 05:51:57 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:53:03.371 05:51:57 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:53:03.371 05:51:57 thread -- scripts/common.sh@368 -- # return 0 00:53:03.371 05:51:57 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:53:03.371 05:51:57 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:53:03.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:03.371 --rc genhtml_branch_coverage=1 00:53:03.371 --rc genhtml_function_coverage=1 00:53:03.371 --rc genhtml_legend=1 00:53:03.371 --rc geninfo_all_blocks=1 00:53:03.371 --rc geninfo_unexecuted_blocks=1 00:53:03.371 00:53:03.371 ' 00:53:03.371 05:51:57 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:53:03.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:03.371 --rc genhtml_branch_coverage=1 00:53:03.371 --rc genhtml_function_coverage=1 00:53:03.371 --rc genhtml_legend=1 00:53:03.371 --rc geninfo_all_blocks=1 00:53:03.371 --rc geninfo_unexecuted_blocks=1 00:53:03.371 00:53:03.371 ' 00:53:03.371 05:51:57 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:53:03.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:53:03.371 --rc genhtml_branch_coverage=1 00:53:03.371 --rc genhtml_function_coverage=1 00:53:03.371 --rc genhtml_legend=1 00:53:03.371 --rc geninfo_all_blocks=1 00:53:03.371 --rc geninfo_unexecuted_blocks=1 00:53:03.371 00:53:03.371 ' 00:53:03.371 05:51:57 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:53:03.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:03.371 --rc genhtml_branch_coverage=1 00:53:03.371 --rc genhtml_function_coverage=1 00:53:03.371 --rc genhtml_legend=1 00:53:03.371 --rc geninfo_all_blocks=1 00:53:03.371 --rc geninfo_unexecuted_blocks=1 00:53:03.371 00:53:03.371 ' 00:53:03.371 05:51:57 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:53:03.371 05:51:57 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:53:03.371 05:51:57 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:03.371 05:51:57 thread -- common/autotest_common.sh@10 -- # set +x 00:53:03.371 ************************************ 00:53:03.371 START TEST thread_poller_perf 00:53:03.371 ************************************ 00:53:03.371 05:51:57 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:53:03.371 [2024-12-09 05:51:57.731785] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:53:03.371 [2024-12-09 05:51:57.732313] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61213 ] 00:53:03.371 [2024-12-09 05:51:57.870219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:03.371 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:53:03.371 [2024-12-09 05:51:57.898654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:04.745 [2024-12-09T05:51:59.331Z] ====================================== 00:53:04.745 [2024-12-09T05:51:59.331Z] busy:2205403162 (cyc) 00:53:04.745 [2024-12-09T05:51:59.331Z] total_run_count: 399000 00:53:04.745 [2024-12-09T05:51:59.331Z] tsc_hz: 2200000000 (cyc) 00:53:04.745 [2024-12-09T05:51:59.331Z] ====================================== 00:53:04.745 [2024-12-09T05:51:59.331Z] poller_cost: 5527 (cyc), 2512 (nsec) 00:53:04.745 00:53:04.745 real 0m1.223s 00:53:04.745 user 0m1.083s 00:53:04.745 sys 0m0.034s 00:53:04.745 05:51:58 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:04.745 05:51:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:53:04.745 ************************************ 00:53:04.745 END TEST thread_poller_perf 00:53:04.745 ************************************ 00:53:04.745 05:51:58 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:53:04.745 05:51:58 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:53:04.745 05:51:58 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:04.745 05:51:58 thread -- common/autotest_common.sh@10 -- # set +x 00:53:04.745 ************************************ 00:53:04.745 START TEST thread_poller_perf 00:53:04.745 ************************************ 00:53:04.745 05:51:58 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:53:04.745 [2024-12-09 05:51:59.011224] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:53:04.745 [2024-12-09 05:51:59.011322] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61243 ] 00:53:04.745 [2024-12-09 05:51:59.154505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:04.745 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:53:04.745 [2024-12-09 05:51:59.184183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:05.680 [2024-12-09T05:52:00.266Z] ====================================== 00:53:05.680 [2024-12-09T05:52:00.266Z] busy:2201787410 (cyc) 00:53:05.680 [2024-12-09T05:52:00.266Z] total_run_count: 4783000 00:53:05.680 [2024-12-09T05:52:00.266Z] tsc_hz: 2200000000 (cyc) 00:53:05.680 [2024-12-09T05:52:00.266Z] ====================================== 00:53:05.680 [2024-12-09T05:52:00.266Z] poller_cost: 460 (cyc), 209 (nsec) 00:53:05.681 00:53:05.681 real 0m1.226s 00:53:05.681 user 0m1.079s 00:53:05.681 sys 0m0.040s 00:53:05.681 05:52:00 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:05.681 05:52:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:53:05.681 ************************************ 00:53:05.681 END TEST thread_poller_perf 00:53:05.681 ************************************ 00:53:05.681 05:52:00 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:53:05.939 00:53:05.939 real 0m2.713s 00:53:05.939 user 0m2.285s 00:53:05.939 sys 0m0.209s 00:53:05.939 05:52:00 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:05.939 05:52:00 thread -- common/autotest_common.sh@10 -- # set +x 00:53:05.939 ************************************ 00:53:05.939 END TEST thread 00:53:05.939 ************************************ 00:53:05.939 05:52:00 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:53:05.939 05:52:00 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:53:05.939 05:52:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:53:05.940 05:52:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:05.940 05:52:00 -- common/autotest_common.sh@10 -- # set +x 00:53:05.940 ************************************ 00:53:05.940 START TEST app_cmdline 00:53:05.940 ************************************ 00:53:05.940 05:52:00 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:53:05.940 * Looking for test storage... 
00:53:05.940 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:53:05.940 05:52:00 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:53:05.940 05:52:00 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:53:05.940 05:52:00 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:53:05.940 05:52:00 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:53:05.940 05:52:00 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:53:05.940 05:52:00 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:53:05.940 05:52:00 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:53:05.940 05:52:00 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:53:05.940 05:52:00 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:53:05.940 05:52:00 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:53:05.940 05:52:00 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:53:05.940 05:52:00 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:53:05.940 05:52:00 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:53:05.940 05:52:00 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:53:05.940 05:52:00 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:53:05.940 05:52:00 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:53:05.940 05:52:00 app_cmdline -- scripts/common.sh@345 -- # : 1 00:53:05.940 05:52:00 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:53:05.940 05:52:00 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:53:05.940 05:52:00 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:53:05.940 05:52:00 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:53:05.940 05:52:00 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:53:05.940 05:52:00 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:53:05.940 05:52:00 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:53:05.940 05:52:00 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:53:05.940 05:52:00 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:53:05.940 05:52:00 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:53:05.940 05:52:00 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:53:05.940 05:52:00 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:53:05.940 05:52:00 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:53:05.940 05:52:00 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:53:05.940 05:52:00 app_cmdline -- scripts/common.sh@368 -- # return 0 00:53:05.940 05:52:00 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:53:05.940 05:52:00 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:53:05.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:05.940 --rc genhtml_branch_coverage=1 00:53:05.940 --rc genhtml_function_coverage=1 00:53:05.940 --rc genhtml_legend=1 00:53:05.940 --rc geninfo_all_blocks=1 00:53:05.940 --rc geninfo_unexecuted_blocks=1 00:53:05.940 00:53:05.940 ' 00:53:05.940 05:52:00 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:53:05.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:05.940 --rc genhtml_branch_coverage=1 00:53:05.940 --rc genhtml_function_coverage=1 00:53:05.940 --rc genhtml_legend=1 00:53:05.940 --rc geninfo_all_blocks=1 00:53:05.940 --rc geninfo_unexecuted_blocks=1 00:53:05.940 
00:53:05.940 ' 00:53:05.940 05:52:00 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:53:05.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:05.940 --rc genhtml_branch_coverage=1 00:53:05.940 --rc genhtml_function_coverage=1 00:53:05.940 --rc genhtml_legend=1 00:53:05.940 --rc geninfo_all_blocks=1 00:53:05.940 --rc geninfo_unexecuted_blocks=1 00:53:05.940 00:53:05.940 ' 00:53:05.940 05:52:00 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:53:05.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:05.940 --rc genhtml_branch_coverage=1 00:53:05.940 --rc genhtml_function_coverage=1 00:53:05.940 --rc genhtml_legend=1 00:53:05.940 --rc geninfo_all_blocks=1 00:53:05.940 --rc geninfo_unexecuted_blocks=1 00:53:05.940 00:53:05.940 ' 00:53:05.940 05:52:00 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:53:05.940 05:52:00 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61326 00:53:05.940 05:52:00 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61326 00:53:05.940 05:52:00 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61326 ']' 00:53:05.940 05:52:00 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:53:05.940 05:52:00 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:05.940 05:52:00 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:05.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:53:05.940 05:52:00 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:53:05.940 05:52:00 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:05.940 05:52:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:53:06.198 [2024-12-09 05:52:00.571614] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:53:06.198 [2024-12-09 05:52:00.571728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61326 ] 00:53:06.198 [2024-12-09 05:52:00.715259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:06.198 [2024-12-09 05:52:00.745000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:06.457 05:52:00 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:06.457 05:52:00 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:53:06.457 05:52:00 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:53:06.715 { 00:53:06.716 "fields": { 00:53:06.716 "commit": "15ce1ba92", 00:53:06.716 "major": 25, 00:53:06.716 "minor": 1, 00:53:06.716 "patch": 0, 00:53:06.716 "suffix": "-pre" 00:53:06.716 }, 00:53:06.716 "version": "SPDK v25.01-pre git sha1 15ce1ba92" 00:53:06.716 } 00:53:06.716 05:52:01 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:53:06.716 05:52:01 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:53:06.716 05:52:01 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:53:06.716 05:52:01 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:53:06.716 05:52:01 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:53:06.716 05:52:01 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:06.716 05:52:01 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:53:06.716 05:52:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:53:06.716 05:52:01 app_cmdline -- app/cmdline.sh@26 -- # sort 00:53:06.716 05:52:01 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:06.716 05:52:01 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:53:06.716 05:52:01 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:53:06.716 05:52:01 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:53:06.716 05:52:01 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:53:06.716 05:52:01 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:53:06.716 05:52:01 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:53:06.716 05:52:01 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:06.716 05:52:01 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:53:06.716 05:52:01 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:06.716 05:52:01 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:53:06.716 05:52:01 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:53:06.716 05:52:01 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:53:06.716 05:52:01 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:53:06.716 05:52:01 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:53:06.974 2024/12/09 05:52:01 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:53:06.974 request: 00:53:06.974 { 00:53:06.974 "method": "env_dpdk_get_mem_stats", 00:53:06.974 "params": {} 00:53:06.974 } 00:53:06.974 Got JSON-RPC error response 00:53:06.974 GoRPCClient: error on JSON-RPC call 00:53:06.974 05:52:01 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:53:06.974 05:52:01 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:53:06.974 05:52:01 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:53:06.974 05:52:01 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:53:06.974 05:52:01 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61326 00:53:06.974 05:52:01 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61326 ']' 00:53:06.974 05:52:01 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61326 00:53:06.974 05:52:01 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:53:06.974 05:52:01 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:06.974 05:52:01 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61326 00:53:07.232 05:52:01 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:53:07.232 05:52:01 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:53:07.232 killing process with pid 61326 00:53:07.232 05:52:01 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61326' 00:53:07.232 05:52:01 app_cmdline -- common/autotest_common.sh@973 -- # kill 61326 00:53:07.232 05:52:01 app_cmdline -- common/autotest_common.sh@978 -- # wait 61326 00:53:07.232 00:53:07.232 real 0m1.475s 00:53:07.232 user 0m1.960s 00:53:07.232 sys 0m0.374s 00:53:07.232 05:52:01 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:07.232 05:52:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:53:07.232 ************************************ 00:53:07.232 END TEST app_cmdline 00:53:07.232 ************************************ 00:53:07.490 05:52:01 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:53:07.490 05:52:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:53:07.490 05:52:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:07.490 05:52:01 -- common/autotest_common.sh@10 -- # set +x 00:53:07.490 ************************************ 00:53:07.490 START TEST version 00:53:07.490 ************************************ 00:53:07.490 05:52:01 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:53:07.490 * Looking for test storage... 
00:53:07.490 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:53:07.490 05:52:01 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:53:07.490 05:52:01 version -- common/autotest_common.sh@1711 -- # lcov --version 00:53:07.490 05:52:01 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:53:07.490 05:52:01 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:53:07.490 05:52:01 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:53:07.490 05:52:01 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:53:07.490 05:52:01 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:53:07.490 05:52:01 version -- scripts/common.sh@336 -- # IFS=.-: 00:53:07.490 05:52:01 version -- scripts/common.sh@336 -- # read -ra ver1 00:53:07.490 05:52:01 version -- scripts/common.sh@337 -- # IFS=.-: 00:53:07.490 05:52:01 version -- scripts/common.sh@337 -- # read -ra ver2 00:53:07.490 05:52:01 version -- scripts/common.sh@338 -- # local 'op=<' 00:53:07.490 05:52:01 version -- scripts/common.sh@340 -- # ver1_l=2 00:53:07.490 05:52:01 version -- scripts/common.sh@341 -- # ver2_l=1 00:53:07.490 05:52:01 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:53:07.490 05:52:01 version -- scripts/common.sh@344 -- # case "$op" in 00:53:07.490 05:52:01 version -- scripts/common.sh@345 -- # : 1 00:53:07.490 05:52:01 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:53:07.490 05:52:01 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:53:07.490 05:52:01 version -- scripts/common.sh@365 -- # decimal 1 00:53:07.490 05:52:01 version -- scripts/common.sh@353 -- # local d=1 00:53:07.490 05:52:01 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:53:07.490 05:52:01 version -- scripts/common.sh@355 -- # echo 1 00:53:07.490 05:52:01 version -- scripts/common.sh@365 -- # ver1[v]=1 00:53:07.490 05:52:01 version -- scripts/common.sh@366 -- # decimal 2 00:53:07.490 05:52:01 version -- scripts/common.sh@353 -- # local d=2 00:53:07.490 05:52:01 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:53:07.490 05:52:01 version -- scripts/common.sh@355 -- # echo 2 00:53:07.490 05:52:01 version -- scripts/common.sh@366 -- # ver2[v]=2 00:53:07.490 05:52:01 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:53:07.490 05:52:01 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:53:07.490 05:52:01 version -- scripts/common.sh@368 -- # return 0 00:53:07.490 05:52:01 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:53:07.490 05:52:01 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:53:07.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:07.490 --rc genhtml_branch_coverage=1 00:53:07.490 --rc genhtml_function_coverage=1 00:53:07.490 --rc genhtml_legend=1 00:53:07.490 --rc geninfo_all_blocks=1 00:53:07.490 --rc geninfo_unexecuted_blocks=1 00:53:07.490 00:53:07.490 ' 00:53:07.490 05:52:01 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:53:07.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:07.490 --rc genhtml_branch_coverage=1 00:53:07.490 --rc genhtml_function_coverage=1 00:53:07.490 --rc genhtml_legend=1 00:53:07.490 --rc geninfo_all_blocks=1 00:53:07.490 --rc geninfo_unexecuted_blocks=1 00:53:07.490 00:53:07.490 ' 00:53:07.490 05:52:01 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:53:07.490 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:53:07.491 --rc genhtml_branch_coverage=1 00:53:07.491 --rc genhtml_function_coverage=1 00:53:07.491 --rc genhtml_legend=1 00:53:07.491 --rc geninfo_all_blocks=1 00:53:07.491 --rc geninfo_unexecuted_blocks=1 00:53:07.491 00:53:07.491 ' 00:53:07.491 05:52:01 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:53:07.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:07.491 --rc genhtml_branch_coverage=1 00:53:07.491 --rc genhtml_function_coverage=1 00:53:07.491 --rc genhtml_legend=1 00:53:07.491 --rc geninfo_all_blocks=1 00:53:07.491 --rc geninfo_unexecuted_blocks=1 00:53:07.491 00:53:07.491 ' 00:53:07.491 05:52:01 version -- app/version.sh@17 -- # get_header_version major 00:53:07.491 05:52:01 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:53:07.491 05:52:01 version -- app/version.sh@14 -- # cut -f2 00:53:07.491 05:52:01 version -- app/version.sh@14 -- # tr -d '"' 00:53:07.491 05:52:02 version -- app/version.sh@17 -- # major=25 00:53:07.491 05:52:02 version -- app/version.sh@18 -- # get_header_version minor 00:53:07.491 05:52:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:53:07.491 05:52:02 version -- app/version.sh@14 -- # cut -f2 00:53:07.491 05:52:02 version -- app/version.sh@14 -- # tr -d '"' 00:53:07.491 05:52:02 version -- app/version.sh@18 -- # minor=1 00:53:07.491 05:52:02 version -- app/version.sh@19 -- # get_header_version patch 00:53:07.491 05:52:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:53:07.491 05:52:02 version -- app/version.sh@14 -- # cut -f2 00:53:07.491 05:52:02 version -- app/version.sh@14 -- # tr -d '"' 00:53:07.491 05:52:02 version -- app/version.sh@19 -- # patch=0 00:53:07.491 05:52:02 version -- app/version.sh@20 -- # get_header_version suffix 00:53:07.491 05:52:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:53:07.491 05:52:02 version -- app/version.sh@14 -- # cut -f2 00:53:07.491 05:52:02 version -- app/version.sh@14 -- # tr -d '"' 00:53:07.491 05:52:02 version -- app/version.sh@20 -- # suffix=-pre 00:53:07.491 05:52:02 version -- app/version.sh@22 -- # version=25.1 00:53:07.491 05:52:02 version -- app/version.sh@25 -- # (( patch != 0 )) 00:53:07.491 05:52:02 version -- app/version.sh@28 -- # version=25.1rc0 00:53:07.491 05:52:02 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:53:07.491 05:52:02 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:53:07.491 05:52:02 version -- app/version.sh@30 -- # py_version=25.1rc0 00:53:07.491 05:52:02 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:53:07.491 00:53:07.491 real 0m0.221s 00:53:07.491 user 0m0.145s 00:53:07.491 sys 0m0.116s 00:53:07.491 05:52:02 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:07.491 05:52:02 version -- common/autotest_common.sh@10 -- # set +x 00:53:07.491 ************************************ 00:53:07.491 END TEST version 00:53:07.491 ************************************ 00:53:07.749 05:52:02 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:53:07.749 05:52:02 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:53:07.749 05:52:02 -- spdk/autotest.sh@194 -- # uname -s 00:53:07.749 05:52:02 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:53:07.749 05:52:02 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:53:07.749 05:52:02 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:53:07.749 05:52:02 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:53:07.749 05:52:02 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:53:07.749 05:52:02 -- spdk/autotest.sh@260 -- # timing_exit lib 00:53:07.749 05:52:02 -- common/autotest_common.sh@732 -- # xtrace_disable 00:53:07.749 05:52:02 -- common/autotest_common.sh@10 -- # set +x 00:53:07.749 05:52:02 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:53:07.749 05:52:02 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:53:07.749 05:52:02 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:53:07.749 05:52:02 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:53:07.749 05:52:02 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:53:07.749 05:52:02 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:53:07.749 05:52:02 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:53:07.749 05:52:02 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:53:07.749 05:52:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:07.749 05:52:02 -- common/autotest_common.sh@10 -- # set +x 00:53:07.749 ************************************ 00:53:07.749 START TEST nvmf_tcp 00:53:07.749 ************************************ 00:53:07.749 05:52:02 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:53:07.749 * Looking for test storage... 00:53:07.749 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:53:07.749 05:52:02 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:53:07.749 05:52:02 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:53:07.749 05:52:02 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:53:07.749 05:52:02 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:53:07.749 05:52:02 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:53:07.749 05:52:02 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:53:07.749 05:52:02 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:53:07.749 05:52:02 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:53:07.749 05:52:02 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:53:07.749 05:52:02 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:53:07.749 05:52:02 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:53:07.749 05:52:02 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:53:07.749 05:52:02 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:53:07.749 05:52:02 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:53:07.749 05:52:02 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:53:07.749 05:52:02 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:53:07.749 05:52:02 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:53:07.749 05:52:02 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:53:07.749 05:52:02 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:53:07.749 05:52:02 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:53:07.749 05:52:02 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:53:07.749 05:52:02 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:53:07.749 05:52:02 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:53:07.750 05:52:02 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:53:07.750 05:52:02 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:53:08.009 05:52:02 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:53:08.009 05:52:02 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:53:08.009 05:52:02 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:53:08.009 05:52:02 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:53:08.009 05:52:02 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:53:08.009 05:52:02 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:53:08.009 05:52:02 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:53:08.009 05:52:02 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:53:08.009 05:52:02 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:53:08.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:08.009 --rc genhtml_branch_coverage=1 00:53:08.009 --rc genhtml_function_coverage=1 00:53:08.009 --rc genhtml_legend=1 00:53:08.009 --rc geninfo_all_blocks=1 00:53:08.009 --rc geninfo_unexecuted_blocks=1 00:53:08.009 00:53:08.009 ' 00:53:08.009 05:52:02 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:53:08.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:08.009 --rc genhtml_branch_coverage=1 00:53:08.009 --rc genhtml_function_coverage=1 00:53:08.009 --rc genhtml_legend=1 00:53:08.009 --rc geninfo_all_blocks=1 00:53:08.009 --rc geninfo_unexecuted_blocks=1 00:53:08.009 00:53:08.009 ' 00:53:08.009 05:52:02 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:53:08.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:08.009 --rc genhtml_branch_coverage=1 00:53:08.009 --rc genhtml_function_coverage=1 00:53:08.009 --rc genhtml_legend=1 00:53:08.009 --rc geninfo_all_blocks=1 00:53:08.009 --rc geninfo_unexecuted_blocks=1 00:53:08.009 00:53:08.009 ' 00:53:08.009 05:52:02 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:53:08.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:08.009 --rc genhtml_branch_coverage=1 00:53:08.009 --rc genhtml_function_coverage=1 00:53:08.009 --rc genhtml_legend=1 00:53:08.009 --rc geninfo_all_blocks=1 00:53:08.009 --rc geninfo_unexecuted_blocks=1 00:53:08.009 00:53:08.009 ' 00:53:08.009 05:52:02 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:53:08.009 05:52:02 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:53:08.009 05:52:02 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:53:08.009 05:52:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:53:08.009 05:52:02 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:08.009 05:52:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:53:08.009 ************************************ 00:53:08.009 START TEST nvmf_target_core 00:53:08.009 ************************************ 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:53:08.009 * Looking for test storage... 00:53:08.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:53:08.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:08.009 --rc genhtml_branch_coverage=1 00:53:08.009 --rc genhtml_function_coverage=1 00:53:08.009 --rc genhtml_legend=1 00:53:08.009 --rc geninfo_all_blocks=1 00:53:08.009 --rc geninfo_unexecuted_blocks=1 00:53:08.009 00:53:08.009 ' 00:53:08.009 05:52:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:53:08.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:08.009 --rc genhtml_branch_coverage=1 00:53:08.010 --rc genhtml_function_coverage=1 00:53:08.010 --rc genhtml_legend=1 00:53:08.010 --rc geninfo_all_blocks=1 00:53:08.010 --rc geninfo_unexecuted_blocks=1 00:53:08.010 00:53:08.010 ' 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:53:08.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:08.010 --rc genhtml_branch_coverage=1 00:53:08.010 --rc genhtml_function_coverage=1 00:53:08.010 --rc genhtml_legend=1 00:53:08.010 --rc geninfo_all_blocks=1 00:53:08.010 --rc geninfo_unexecuted_blocks=1 00:53:08.010 00:53:08.010 ' 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:53:08.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:08.010 --rc genhtml_branch_coverage=1 00:53:08.010 --rc genhtml_function_coverage=1 00:53:08.010 --rc genhtml_legend=1 00:53:08.010 --rc geninfo_all_blocks=1 00:53:08.010 --rc geninfo_unexecuted_blocks=1 00:53:08.010 00:53:08.010 ' 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:53:08.010 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:53:08.010 ************************************ 00:53:08.010 START TEST nvmf_abort 00:53:08.010 ************************************ 00:53:08.010 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:53:08.269 * Looking for test storage... 
00:53:08.269 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:53:08.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:08.269 --rc genhtml_branch_coverage=1 00:53:08.269 --rc genhtml_function_coverage=1 00:53:08.269 --rc genhtml_legend=1 00:53:08.269 --rc geninfo_all_blocks=1 00:53:08.269 --rc geninfo_unexecuted_blocks=1 00:53:08.269 00:53:08.269 ' 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:53:08.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:08.269 --rc genhtml_branch_coverage=1 00:53:08.269 --rc genhtml_function_coverage=1 00:53:08.269 --rc genhtml_legend=1 00:53:08.269 --rc geninfo_all_blocks=1 00:53:08.269 --rc geninfo_unexecuted_blocks=1 00:53:08.269 00:53:08.269 ' 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:53:08.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:08.269 --rc genhtml_branch_coverage=1 00:53:08.269 --rc genhtml_function_coverage=1 00:53:08.269 --rc genhtml_legend=1 00:53:08.269 --rc geninfo_all_blocks=1 00:53:08.269 --rc geninfo_unexecuted_blocks=1 00:53:08.269 00:53:08.269 ' 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:53:08.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:08.269 --rc genhtml_branch_coverage=1 00:53:08.269 --rc genhtml_function_coverage=1 00:53:08.269 --rc genhtml_legend=1 00:53:08.269 --rc geninfo_all_blocks=1 00:53:08.269 --rc geninfo_unexecuted_blocks=1 00:53:08.269 00:53:08.269 ' 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:08.269 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:53:08.270 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:53:08.270 
05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # ip link set 
nvmf_init_br nomaster 00:53:08.270 Cannot find device "nvmf_init_br" 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # true 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:53:08.270 Cannot find device "nvmf_init_br2" 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # true 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:53:08.270 Cannot find device "nvmf_tgt_br" 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # true 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:53:08.270 Cannot find device "nvmf_tgt_br2" 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # true 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:53:08.270 Cannot find device "nvmf_init_br" 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # true 00:53:08.270 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:53:08.529 Cannot find device "nvmf_init_br2" 00:53:08.529 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # true 00:53:08.529 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:53:08.529 Cannot find device "nvmf_tgt_br" 00:53:08.529 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # true 00:53:08.529 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:53:08.529 Cannot find device "nvmf_tgt_br2" 00:53:08.529 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # true 00:53:08.529 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:53:08.529 Cannot find device "nvmf_br" 00:53:08.529 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # true 00:53:08.529 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:53:08.529 Cannot find device "nvmf_init_if" 00:53:08.529 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # true 00:53:08.529 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:53:08.529 Cannot find device "nvmf_init_if2" 00:53:08.529 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # true 00:53:08.529 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:53:08.529 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:08.529 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # true 00:53:08.529 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:53:08.529 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:08.529 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # true 00:53:08.529 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:53:08.529 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:53:08.529 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:53:08.529 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:53:08.529 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:53:08.529 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:53:08.529 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:53:08.529 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:53:08.529 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:53:08.529 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:53:08.529 05:52:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:53:08.529 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:53:08.529 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:53:08.529 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:53:08.529 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:53:08.529 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:53:08.529 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:53:08.529 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:53:08.529 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:53:08.529 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:53:08.529 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:53:08.529 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:53:08.529 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT' 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:53:08.788 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:53:08.788 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.129 ms 00:53:08.788 00:53:08.788 --- 10.0.0.3 ping statistics --- 00:53:08.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:08.788 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:53:08.788 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:53:08.788 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:53:08.788 00:53:08.788 --- 10.0.0.4 ping statistics --- 00:53:08.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:08.788 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:53:08.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:53:08.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:53:08.788 00:53:08.788 --- 10.0.0.1 ping statistics --- 00:53:08.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:08.788 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:53:08.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:53:08.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:53:08.788 00:53:08.788 --- 10.0.0.2 ping statistics --- 00:53:08.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:08.788 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@461 -- # return 0 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=61740 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 61740 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 61740 ']' 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:08.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:08.788 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:53:08.788 [2024-12-09 05:52:03.305537] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
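The nvmftestinit sequence above builds the veth/bridge test topology and then opens NVMe/TCP port 4420 through the ipts wrapper, which tags each iptables rule with an SPDK_NVMF comment; the matching iptr helper used at teardown later in this log strips exactly those tagged rules. A minimal sketch of that tag-and-clean pattern, using only commands visible in this log (run as root, interface names as created above; this is an illustration of the pattern, not the harness functions themselves):

    # allow NVMe/TCP traffic in, tagging the rule so it can be found again (what ipts does)
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # teardown: rewrite the ruleset without any SPDK_NVMF-tagged rules (what iptr does)
    iptables-save | grep -v SPDK_NVMF | iptables-restore
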
00:53:08.788 [2024-12-09 05:52:03.305628] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:53:09.047 [2024-12-09 05:52:03.458773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:53:09.047 [2024-12-09 05:52:03.500414] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:53:09.047 [2024-12-09 05:52:03.500481] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:53:09.047 [2024-12-09 05:52:03.500496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:53:09.047 [2024-12-09 05:52:03.500506] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:53:09.047 [2024-12-09 05:52:03.500515] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:53:09.047 [2024-12-09 05:52:03.501431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:53:09.047 [2024-12-09 05:52:03.501572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:09.047 [2024-12-09 05:52:03.501573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:53:09.047 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:09.047 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:53:09.048 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:53:09.048 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:53:09.048 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:53:09.307 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:53:09.307 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:53:09.307 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:09.307 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:53:09.307 [2024-12-09 05:52:03.649807] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:09.307 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:09.307 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:53:09.307 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:09.307 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:53:09.307 Malloc0 00:53:09.307 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:09.307 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:53:09.307 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:09.307 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:53:09.307 
Delay0 00:53:09.307 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:09.307 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:53:09.307 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:09.307 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:53:09.307 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:09.307 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:53:09.307 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:09.307 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:53:09.307 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:09.307 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:53:09.307 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:09.307 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:53:09.307 [2024-12-09 05:52:03.720471] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:53:09.307 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:09.307 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:53:09.307 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:09.307 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:53:09.307 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:09.307 05:52:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:53:09.567 [2024-12-09 05:52:03.910705] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:53:11.470 Initializing NVMe Controllers 00:53:11.470 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:53:11.470 controller IO queue size 128 less than required 00:53:11.470 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:53:11.470 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:53:11.470 Initialization complete. Launching workers. 
00:53:11.470 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29423 00:53:11.470 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29484, failed to submit 62 00:53:11.470 success 29427, unsuccessful 57, failed 0 00:53:11.470 05:52:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:53:11.470 05:52:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:11.470 05:52:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:53:11.470 05:52:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:11.470 05:52:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:53:11.470 05:52:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:53:11.470 05:52:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:53:11.470 05:52:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:53:11.470 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:53:11.470 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:53:11.470 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:53:11.470 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:53:11.470 rmmod nvme_tcp 00:53:11.470 rmmod nvme_fabrics 00:53:11.470 rmmod nvme_keyring 00:53:11.470 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:53:11.470 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:53:11.470 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:53:11.470 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 61740 ']' 00:53:11.470 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 61740 00:53:11.729 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 61740 ']' 00:53:11.729 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 61740 00:53:11.729 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:53:11.729 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:11.729 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61740 00:53:11.729 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:53:11.729 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:53:11.729 killing process with pid 61740 00:53:11.729 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61740' 00:53:11.729 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 61740 00:53:11.729 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 61740 00:53:11.729 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:53:11.729 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:53:11.729 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:53:11.729 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:53:11.729 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:53:11.729 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:53:11.730 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:53:11.730 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:53:11.730 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:53:11.730 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:53:11.730 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:53:11.730 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:53:11.730 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:53:11.730 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:53:11.730 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:53:11.730 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:53:11.730 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:53:11.730 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:53:11.989 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:53:11.989 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:53:11.989 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:53:11.989 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:53:11.989 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:53:11.989 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:11.989 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:53:11.989 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:11.989 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:53:11.989 00:53:11.989 real 0m3.867s 00:53:11.989 user 0m10.162s 00:53:11.989 sys 0m1.008s 00:53:11.989 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:11.989 ************************************ 00:53:11.989 END TEST nvmf_abort 00:53:11.989 ************************************ 00:53:11.989 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:53:11.989 05:52:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:53:11.989 05:52:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:53:11.989 05:52:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:11.989 05:52:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:53:11.989 ************************************ 00:53:11.989 START TEST nvmf_ns_hotplug_stress 00:53:11.989 ************************************ 00:53:11.989 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:53:11.989 * Looking for test storage... 00:53:11.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:53:11.989 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:53:11.989 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:53:11.989 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:53:12.249 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:53:12.249 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:53:12.249 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:53:12.249 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:53:12.249 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:53:12.249 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:53:12.249 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:53:12.249 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:53:12.249 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:53:12.249 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:53:12.249 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:53:12.249 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:53:12.249 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:53:12.249 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:53:12.249 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:53:12.249 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:53:12.249 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:53:12.249 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:53:12.249 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:53:12.249 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:53:12.249 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:53:12.249 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:53:12.249 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:53:12.249 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:53:12.249 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:53:12.249 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:53:12.249 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:53:12.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:12.250 --rc genhtml_branch_coverage=1 00:53:12.250 --rc genhtml_function_coverage=1 00:53:12.250 --rc genhtml_legend=1 00:53:12.250 --rc geninfo_all_blocks=1 00:53:12.250 --rc geninfo_unexecuted_blocks=1 00:53:12.250 00:53:12.250 ' 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:53:12.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:12.250 --rc genhtml_branch_coverage=1 00:53:12.250 --rc genhtml_function_coverage=1 00:53:12.250 --rc genhtml_legend=1 00:53:12.250 --rc geninfo_all_blocks=1 00:53:12.250 --rc geninfo_unexecuted_blocks=1 00:53:12.250 00:53:12.250 ' 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:53:12.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:12.250 --rc genhtml_branch_coverage=1 00:53:12.250 --rc genhtml_function_coverage=1 00:53:12.250 --rc genhtml_legend=1 00:53:12.250 --rc geninfo_all_blocks=1 00:53:12.250 --rc geninfo_unexecuted_blocks=1 00:53:12.250 00:53:12.250 ' 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:53:12.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:12.250 --rc genhtml_branch_coverage=1 00:53:12.250 --rc genhtml_function_coverage=1 00:53:12.250 --rc genhtml_legend=1 00:53:12.250 --rc geninfo_all_blocks=1 00:53:12.250 --rc geninfo_unexecuted_blocks=1 00:53:12.250 00:53:12.250 ' 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:53:12.250 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:53:12.250 05:52:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:53:12.250 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:53:12.251 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:53:12.251 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:53:12.251 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:53:12.251 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:53:12.251 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:53:12.251 Cannot find device "nvmf_init_br" 00:53:12.251 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:53:12.251 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:53:12.251 Cannot find device "nvmf_init_br2" 00:53:12.251 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:53:12.251 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:53:12.251 Cannot find device "nvmf_tgt_br" 00:53:12.251 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:53:12.251 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:53:12.251 Cannot find device "nvmf_tgt_br2" 00:53:12.251 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:53:12.251 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:53:12.251 Cannot find device "nvmf_init_br" 00:53:12.251 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:53:12.251 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:53:12.251 Cannot find device "nvmf_init_br2" 00:53:12.251 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:53:12.251 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:53:12.251 Cannot find device "nvmf_tgt_br" 00:53:12.251 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:53:12.251 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:53:12.251 Cannot find device "nvmf_tgt_br2" 00:53:12.251 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:53:12.251 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:53:12.251 Cannot find device "nvmf_br" 00:53:12.251 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@170 -- # true 00:53:12.251 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:53:12.251 Cannot find device "nvmf_init_if" 00:53:12.251 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:53:12.251 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:53:12.251 Cannot find device "nvmf_init_if2" 00:53:12.251 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:53:12.251 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:53:12.251 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:12.251 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:53:12.251 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:53:12.510 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:12.510 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:53:12.510 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:53:12.510 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:53:12.510 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:53:12.510 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:53:12.510 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:53:12.510 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:53:12.510 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:53:12.510 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:53:12.510 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:53:12.510 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:53:12.510 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:53:12.510 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:53:12.510 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:53:12.510 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:53:12.510 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:53:12.510 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip 
link set nvmf_tgt_br up 00:53:12.510 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:53:12.510 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:53:12.510 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:53:12.510 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:53:12.510 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:53:12.510 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:53:12.510 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:53:12.510 05:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:53:12.510 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:53:12.510 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:53:12.510 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:53:12.511 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:53:12.511 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:53:12.511 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:53:12.511 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:53:12.511 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:53:12.511 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:53:12.511 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:53:12.511 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:53:12.511 00:53:12.511 --- 10.0.0.3 ping statistics --- 00:53:12.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:12.511 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:53:12.511 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:53:12.511 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:53:12.511 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:53:12.511 00:53:12.511 --- 10.0.0.4 ping statistics --- 00:53:12.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:12.511 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:53:12.511 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:53:12.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:53:12.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:53:12.511 00:53:12.511 --- 10.0.0.1 ping statistics --- 00:53:12.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:12.511 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:53:12.511 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:53:12.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:53:12.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:53:12.511 00:53:12.511 --- 10.0.0.2 ping statistics --- 00:53:12.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:12.511 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:53:12.511 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:53:12.511 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 00:53:12.511 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:53:12.511 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:53:12.511 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:53:12.511 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:53:12.511 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:53:12.511 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:53:12.511 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:53:12.511 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:53:12.511 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:53:12.511 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:53:12.511 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:53:12.770 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=62019 00:53:12.770 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:53:12.770 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 62019 00:53:12.770 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 62019 ']' 00:53:12.770 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:12.770 05:52:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:12.770 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:53:12.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:53:12.771 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:12.771 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:53:12.771 [2024-12-09 05:52:07.154282] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:53:12.771 [2024-12-09 05:52:07.154363] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:53:12.771 [2024-12-09 05:52:07.296703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:53:12.771 [2024-12-09 05:52:07.324640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:53:12.771 [2024-12-09 05:52:07.324728] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:53:12.771 [2024-12-09 05:52:07.324738] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:53:12.771 [2024-12-09 05:52:07.324746] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:53:12.771 [2024-12-09 05:52:07.324752] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
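The startup notices above point at the target's built-in tracing: with nvmf_tgt launched as "-e 0xFFFF -i 0", a snapshot of the nvmf tracepoints can be pulled at runtime, or the shared-memory trace buffer can be copied for offline analysis. A small sketch of both options, assuming the spdk_trace tool from the same build tree; the output redirection and destination paths are illustrative and not taken from this log:

    # capture a snapshot of nvmf tracepoint events from shm instance 0 (as the notice suggests)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt
    # or keep the raw trace buffer for later decoding
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
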
00:53:12.771 [2024-12-09 05:52:07.325451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:53:12.771 [2024-12-09 05:52:07.325591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:53:12.771 [2024-12-09 05:52:07.325594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:13.030 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:13.030 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:53:13.030 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:53:13.030 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:53:13.030 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:53:13.030 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:53:13.030 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:53:13.030 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:53:13.289 [2024-12-09 05:52:07.737492] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:13.289 05:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:53:13.586 05:52:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:53:13.901 [2024-12-09 05:52:08.287312] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:53:13.901 05:52:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:53:14.160 05:52:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:53:14.419 Malloc0 00:53:14.419 05:52:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:53:14.678 Delay0 00:53:14.678 05:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:14.678 05:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:53:14.937 NULL1 00:53:14.937 05:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:53:15.197 05:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # 
PERF_PID=62137 00:53:15.197 05:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:53:15.197 05:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62137 00:53:15.197 05:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:16.577 Read completed with error (sct=0, sc=11) 00:53:16.577 05:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:16.577 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:16.577 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:16.577 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:16.577 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:16.577 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:16.577 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:16.836 05:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:53:16.836 05:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:53:17.095 true 00:53:17.095 05:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62137 00:53:17.095 05:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:17.663 05:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:17.663 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:17.921 05:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:53:17.921 05:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:53:18.180 true 00:53:18.180 05:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62137 00:53:18.180 05:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:18.439 05:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:18.697 05:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:53:18.697 05:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:53:18.955 true 00:53:18.955 05:52:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62137 00:53:18.955 05:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:19.889 05:52:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:20.147 05:52:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:53:20.147 05:52:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:53:20.406 true 00:53:20.406 05:52:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62137 00:53:20.406 05:52:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:20.406 05:52:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:20.663 05:52:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:53:20.663 05:52:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:53:20.920 true 00:53:20.920 05:52:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62137 00:53:20.920 05:52:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:21.853 05:52:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:22.112 05:52:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:53:22.112 05:52:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:53:22.369 true 00:53:22.369 05:52:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62137 00:53:22.369 05:52:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:22.627 05:52:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:22.886 05:52:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:53:22.886 05:52:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:53:22.886 true 00:53:22.886 05:52:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 62137 00:53:22.886 05:52:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:23.821 05:52:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:24.080 05:52:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:53:24.080 05:52:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:53:24.338 true 00:53:24.338 05:52:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62137 00:53:24.338 05:52:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:24.597 05:52:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:24.856 05:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:53:24.856 05:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:53:24.856 true 00:53:25.115 05:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62137 00:53:25.115 05:52:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:26.052 05:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:26.052 05:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:53:26.052 05:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:53:26.311 true 00:53:26.311 05:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62137 00:53:26.311 05:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:26.582 05:52:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:26.849 05:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:53:26.849 05:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:53:26.849 true 00:53:26.849 05:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62137 00:53:26.849 05:52:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:27.786 05:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:28.044 05:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:53:28.044 05:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:53:28.303 true 00:53:28.303 05:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62137 00:53:28.303 05:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:28.562 05:52:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:28.820 05:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:53:28.820 05:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:53:29.079 true 00:53:29.079 05:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62137 00:53:29.079 05:52:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:30.014 05:52:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:30.014 05:52:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:53:30.014 05:52:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:53:30.276 true 00:53:30.276 05:52:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62137 00:53:30.276 05:52:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:30.535 05:52:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:30.793 05:52:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:53:30.793 05:52:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:53:31.051 true 00:53:31.051 05:52:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62137 00:53:31.051 05:52:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:31.986 05:52:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:31.986 05:52:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:53:31.986 05:52:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:53:32.245 true 00:53:32.245 05:52:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62137 00:53:32.245 05:52:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:32.503 05:52:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:32.762 05:52:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:53:32.762 05:52:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:53:33.022 true 00:53:33.022 05:52:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62137 00:53:33.022 05:52:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:33.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:33.960 05:52:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:34.219 05:52:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:53:34.219 05:52:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:53:34.219 true 00:53:34.219 05:52:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62137 00:53:34.219 05:52:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:34.478 05:52:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:34.737 05:52:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:53:34.737 05:52:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:53:34.996 true 00:53:34.996 05:52:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62137 00:53:34.996 05:52:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:35.935 05:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:36.193 05:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:53:36.194 05:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:53:36.453 true 00:53:36.453 05:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62137 00:53:36.453 05:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:36.712 05:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:36.972 05:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:53:36.972 05:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:53:36.972 true 00:53:36.972 05:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62137 00:53:36.972 05:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:37.231 05:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:37.492 05:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:53:37.492 05:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:53:37.768 true 00:53:37.768 05:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62137 00:53:37.768 05:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:39.159 05:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:39.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:39.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:39.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:39.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:39.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:39.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:39.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:39.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:39.159 
05:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:53:39.159 05:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:53:39.419 true 00:53:39.419 05:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62137 00:53:39.419 05:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:40.355 05:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:40.355 05:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:53:40.355 05:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:53:40.613 true 00:53:40.613 05:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62137 00:53:40.613 05:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:40.871 05:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:41.130 05:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:53:41.130 05:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:53:41.389 true 00:53:41.389 05:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62137 00:53:41.389 05:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:41.648 05:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:41.907 05:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:53:41.907 05:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:53:42.165 true 00:53:42.165 05:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62137 00:53:42.165 05:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:43.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:43.100 05:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:43.358 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:43.358 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:43.358 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:43.358 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:43.358 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:43.358 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:53:43.358 05:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:53:43.358 05:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:53:43.617 true 00:53:43.617 05:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62137 00:53:43.617 05:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:44.553 05:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:44.812 05:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:53:44.812 05:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:53:44.812 true 00:53:45.071 05:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62137 00:53:45.071 05:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:45.071 05:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:45.330 05:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:53:45.330 05:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:53:45.589 true 00:53:45.589 05:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62137 00:53:45.589 05:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:46.528 Initializing NVMe Controllers 00:53:46.528 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:53:46.528 Controller IO queue size 128, less than required. 00:53:46.528 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:53:46.528 Controller IO queue size 128, less than required. 00:53:46.528 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
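
The summary that follows closes the measurement phase. Pulled together from the xtrace entries above (ns_hotplug_stress.sh@27-@50), that phase amounts to the sketch below: provision the TCP target over RPC, start spdk_nvme_perf against it, and keep hot-removing, re-adding and resizing namespaces for as long as the perf process stays alive. The individual commands are the ones recorded in the log; the loop structure is condensed from the trace, not copied from the script.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Provision the TCP target: transport, subsystem, listeners, and two namespaces
    # (Delay0 layered on Malloc0, plus the resizable null bdev NULL1).
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns "$nqn" Delay0
    $rpc bdev_null_create NULL1 1000 512
    $rpc nvmf_subsystem_add_ns "$nqn" NULL1

    # 30 s of 512-byte random reads at queue depth 128 from initiator core 0.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    # Hotplug loop: while perf is still running, detach namespace 1, re-attach
    # Delay0, and resize NULL1 to the next value (1001, 1002, ... as in the log).
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do
        $rpc nvmf_subsystem_remove_ns "$nqn" 1
        $rpc nvmf_subsystem_add_ns "$nqn" Delay0
        $rpc bdev_null_resize NULL1 $((++null_size))
    done
    wait "$PERF_PID"
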
00:53:46.528 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:53:46.528 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:53:46.528 Initialization complete. Launching workers.
00:53:46.528 ========================================================
00:53:46.528 Latency(us)
00:53:46.528 Device Information : IOPS MiB/s Average min max
00:53:46.528 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 846.37 0.41 88760.79 3112.30 1012733.72
00:53:46.528 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 12501.90 6.10 10238.10 3197.47 518091.31
00:53:46.528 ========================================================
00:53:46.528 Total : 13348.27 6.52 15216.95 3112.30 1012733.72
00:53:46.528
00:53:46.528 05:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:53:46.787 05:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:53:46.787 05:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:53:46.787 true
00:53:47.048 05:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62137
00:53:47.048 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (62137) - No such process
00:53:47.048 05:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 62137
00:53:47.048 05:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:53:47.048 05:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:53:47.307 05:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:53:47.307 05:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:53:47.307 05:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:53:47.307 05:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:53:47.307 05:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:53:47.566 null0
00:53:47.566 05:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:53:47.566 05:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:53:47.566 05:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:53:47.826 null1
00:53:47.826 05:52:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:53:48.106 null2 00:53:48.107 05:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:53:48.107 05:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:53:48.107 05:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:53:48.366 null3 00:53:48.366 05:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:53:48.366 05:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:53:48.366 05:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:53:48.366 null4 00:53:48.366 05:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:53:48.366 05:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:53:48.366 05:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:53:48.625 null5 00:53:48.625 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:53:48.625 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:53:48.625 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:53:48.884 null6 00:53:48.884 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:53:48.884 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:53:48.884 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:53:49.145 null7 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
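
The add_remove worker just launched above can be reconstructed from the xtrace tags @14-@18: each worker owns one namespace ID and one null bdev and attaches/detaches that namespace ten times in a row. A minimal sketch of the helper (the real one lives in test/nvmf/target/ns_hotplug_stress.sh and may differ in detail):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    add_remove() {
        local nsid=$1 bdev=$2
        for (( i = 0; i < 10; i++ )); do
            # Attach the bdev as namespace $nsid, then detach it again.
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
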
00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:53:49.145 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:53:49.146 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:53:49.146 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:53:49.146 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:53:49.146 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:53:49.146 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:49.146 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:53:49.146 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:53:49.146 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:53:49.146 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:53:49.146 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:53:49.146 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 63205 63206 63208 63210 63211 63213 63216 63217 00:53:49.146 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:53:49.146 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:53:49.146 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:53:49.146 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:49.146 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:53:49.146 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:53:49.146 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:53:49.146 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:53:49.146 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:49.146 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:53:49.405 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:53:49.405 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:53:49.405 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:53:49.405 05:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:53:49.405 05:52:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:53:49.665 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:53:49.665 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:49.665 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:53:49.665 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:49.665 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:49.665 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:53:49.665 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:49.665 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:49.665 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:53:49.925 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:49.925 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:49.925 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:53:49.925 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:49.925 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:49.925 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:53:49.925 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:49.925 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:49.925 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:49.925 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:49.925 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:53:49.925 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
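
The interleaved add/remove calls in these entries come from eight such workers running concurrently. The dispatch pattern visible in the @58-@66 entries (nthreads=8, pids+=($!), wait 63205 63206 ...) corresponds to roughly this sketch:

    nthreads=8
    pids=()
    for (( i = 0; i < nthreads; i++ )); do
        add_remove $((i + 1)) "null$i" &   # worker i hotplugs null$i as NSID i+1
        pids+=($!)
    done
    wait "${pids[@]}"                      # block until all eight workers finish
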
00:53:49.925 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:49.925 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:49.925 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:53:49.925 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:49.925 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:49.925 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:53:49.925 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:53:49.925 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:53:50.184 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:53:50.184 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:53:50.184 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:53:50.184 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:53:50.184 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:50.184 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:53:50.184 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:50.184 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:50.184 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:53:50.442 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:50.442 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:50.442 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:53:50.442 05:52:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:50.442 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:50.442 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:53:50.442 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:50.442 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:50.442 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:53:50.442 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:50.442 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:50.442 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:53:50.442 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:50.442 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:50.442 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:53:50.442 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:50.442 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:50.442 05:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:53:50.442 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:50.442 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:50.442 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:53:50.700 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:53:50.700 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:53:50.700 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:53:50.700 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 
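
While the workers churn, the subsystem's namespace list changes from one moment to the next; it can be spot-checked from another shell with the standard nvmf_get_subsystems RPC. This query is illustrative only and is not part of the captured run:

    # Lists the subsystems with whichever of NSIDs 1-8 happen to be attached right now.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems
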
00:53:50.700 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:53:50.700 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:50.958 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:53:50.958 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:53:50.958 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:50.958 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:50.958 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:53:50.958 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:50.958 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:50.958 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:53:50.958 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:50.958 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:50.958 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:53:50.958 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:50.958 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:50.958 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:53:50.958 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:50.958 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:50.958 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:53:51.216 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:51.216 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:51.216 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:53:51.216 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:51.216 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:51.216 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:53:51.216 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:51.216 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:51.216 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:53:51.216 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:53:51.216 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:53:51.216 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:53:51.216 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:53:51.473 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:53:51.473 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:51.473 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:53:51.473 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:53:51.473 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:51.473 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:51.473 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:53:51.473 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:51.473 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:51.473 05:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:53:51.473 05:52:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:51.473 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:51.473 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:53:51.473 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:51.473 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:51.473 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:53:51.732 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:51.732 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:51.732 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:53:51.732 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:51.732 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:51.732 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:53:51.732 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:51.732 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:51.732 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:53:51.732 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:51.732 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:51.732 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:53:51.732 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:53:51.732 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:53:51.991 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:53:51.991 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 
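For readers following the individual calls: as invoked in this trace, nvmf_subsystem_add_ns takes the namespace ID via -n followed by the subsystem NQN and the backing bdev name, while nvmf_subsystem_remove_ns takes the NQN and the namespace ID to detach. Two standalone invocations in exactly the form used here:

    # attach bdev null0 to cnode1 as namespace 1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
    # detach namespace 1 from cnode1 again
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
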
00:53:51.991 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:53:51.991 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:53:51.991 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:51.991 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:53:52.249 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:52.249 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:52.249 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:53:52.249 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:52.249 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:52.249 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:53:52.249 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:52.249 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:52.249 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:53:52.249 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:52.249 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:52.249 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:53:52.249 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:52.249 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:52.249 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:53:52.249 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:52.249 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:52.249 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:53:52.249 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:52.249 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:52.249 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:53:52.507 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:52.507 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:52.507 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:53:52.507 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:53:52.507 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:53:52.507 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:53:52.507 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:53:52.507 05:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:53:52.507 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:53:52.507 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:52.765 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:53:52.765 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:52.765 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:52.765 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:53:52.765 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:52.765 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:52.765 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:53:52.765 05:52:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:52.765 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:52.766 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:53:52.766 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:52.766 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:52.766 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:53:52.766 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:52.766 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:52.766 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:53:53.024 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:53.024 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:53.024 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:53:53.024 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:53.024 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:53.024 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:53:53.024 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:53.024 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:53.024 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:53:53.024 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:53:53.024 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:53:53.024 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:53:53.024 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
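Between passes it can be useful to check what is actually attached at a given moment instead of inferring it from the add/remove interleaving. A sketch of such a check, not part of the test itself: nvmf_get_subsystems is the SPDK RPC that lists subsystems with their namespaces, and the jq filter plus the .namespaces[].nsid field names are assumptions made for illustration.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # print the namespace IDs currently attached to cnode1 (field names assumed; requires jq)
    "$rpc" nvmf_get_subsystems \
        | jq -r '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces[].nsid'
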
00:53:53.024 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:53:53.282 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:53.282 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:53:53.282 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:53:53.282 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:53.282 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:53.282 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:53:53.282 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:53.282 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:53.282 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:53:53.282 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:53.282 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:53.282 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:53:53.282 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:53.282 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:53.282 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:53:53.540 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:53.540 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:53.540 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:53:53.540 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:53.540 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:53.540 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:53:53.540 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:53.540 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:53.541 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:53:53.541 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:53.541 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:53.541 05:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:53:53.541 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:53:53.541 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:53:53.541 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:53:53.799 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:53:53.799 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:53:53.799 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:53:53.799 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:53.799 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:53:53.799 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:53.799 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:53.799 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:53:53.799 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:53.799 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:53.799 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:53:54.069 05:52:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:54.069 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:54.069 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:53:54.070 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:54.070 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:54.070 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:53:54.070 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:54.070 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:54.070 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:53:54.070 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:54.070 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:54.070 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:53:54.070 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:54.070 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:54.070 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:53:54.070 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:54.070 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:54.070 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:53:54.070 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:53:54.070 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:53:54.327 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:53:54.327 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
00:53:54.327 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:53:54.327 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:53:54.327 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:53:54.327 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:54.584 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:54.584 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:54.584 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:53:54.584 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:54.584 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:54.584 05:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:53:54.584 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:54.584 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:54.585 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:53:54.585 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:54.585 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:54.585 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:53:54.585 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:54.585 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:54.585 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:53:54.585 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:54.585 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:54.585 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:53:54.585 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:54.585 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:54.585 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:53:54.843 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:54.843 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:54.843 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:53:54.843 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:53:54.843 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:53:54.843 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:53:54.843 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:53:54.843 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:53:54.843 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:53:55.103 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:53:55.103 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:53:55.103 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:55.103 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:55.103 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:55.103 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:55.103 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:55.103 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:55.103 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:55.103 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:55.103 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:55.103 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:55.362 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:55.362 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:55.362 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:55.362 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:55.362 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:53:55.362 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:53:55.362 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:53:55.362 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:53:55.362 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:53:55.362 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:53:55.362 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:53:55.362 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:53:55.362 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:53:55.362 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:53:55.362 rmmod nvme_tcp 00:53:55.362 rmmod nvme_fabrics 00:53:55.362 rmmod nvme_keyring 00:53:55.362 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:53:55.363 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:53:55.363 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:53:55.363 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 62019 ']' 00:53:55.363 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 62019 00:53:55.363 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 62019 ']' 00:53:55.363 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 62019 00:53:55.363 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:53:55.363 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:53:55.363 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62019 00:53:55.363 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:53:55.363 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:53:55.363 05:52:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62019' 00:53:55.363 killing process with pid 62019 00:53:55.363 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 62019 00:53:55.363 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 62019 00:53:55.621 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:53:55.621 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:53:55.621 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:53:55.621 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:53:55.621 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:53:55.621 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:53:55.621 05:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:53:55.621 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:53:55.621 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:53:55.621 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:53:55.621 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:53:55.621 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:53:55.621 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:53:55.621 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:53:55.621 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:53:55.621 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:53:55.621 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:53:55.621 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:53:55.621 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:53:55.621 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:53:55.621 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:53:55.621 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:53:55.621 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:53:55.621 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:55.621 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:53:55.621 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0 00:53:55.880 00:53:55.880 real 0m43.740s 00:53:55.880 user 3m33.811s 00:53:55.880 sys 0m11.910s 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:53:55.880 ************************************ 00:53:55.880 END TEST nvmf_ns_hotplug_stress 00:53:55.880 ************************************ 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:53:55.880 ************************************ 00:53:55.880 START TEST nvmf_delete_subsystem 00:53:55.880 ************************************ 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:53:55.880 * Looking for test storage... 00:53:55.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:53:55.880 05:52:50 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:53:55.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:55.880 --rc genhtml_branch_coverage=1 00:53:55.880 --rc genhtml_function_coverage=1 00:53:55.880 --rc genhtml_legend=1 00:53:55.880 --rc geninfo_all_blocks=1 00:53:55.880 --rc geninfo_unexecuted_blocks=1 00:53:55.880 00:53:55.880 ' 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:53:55.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:55.880 --rc genhtml_branch_coverage=1 00:53:55.880 --rc genhtml_function_coverage=1 00:53:55.880 --rc genhtml_legend=1 00:53:55.880 --rc geninfo_all_blocks=1 00:53:55.880 --rc geninfo_unexecuted_blocks=1 00:53:55.880 00:53:55.880 ' 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:53:55.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:55.880 --rc genhtml_branch_coverage=1 00:53:55.880 --rc genhtml_function_coverage=1 00:53:55.880 --rc genhtml_legend=1 00:53:55.880 --rc geninfo_all_blocks=1 00:53:55.880 --rc geninfo_unexecuted_blocks=1 00:53:55.880 00:53:55.880 ' 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:53:55.880 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:53:55.880 --rc genhtml_branch_coverage=1 00:53:55.880 --rc genhtml_function_coverage=1 00:53:55.880 --rc genhtml_legend=1 00:53:55.880 --rc geninfo_all_blocks=1 00:53:55.880 --rc geninfo_unexecuted_blocks=1 00:53:55.880 00:53:55.880 ' 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 00:53:55.880 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:53:55.881 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:53:55.881 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:53:55.881 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:53:55.881 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:53:55.881 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:53:55.881 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:53:55.881 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:53:55.881 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:53:55.881 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:55.881 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:55.881 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:55.881 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:53:55.881 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:55.881 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:53:55.881 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:53:55.881 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:53:55.881 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:53:55.881 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:53:55.881 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:53:55.881 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:53:55.881 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:53:55.881 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:53:55.881 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:53:55.881 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:53:55.881 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:53:55.881 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:53:55.881 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # 
NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:53:56.140 Cannot find device "nvmf_init_br" 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:53:56.140 Cannot find device "nvmf_init_br2" 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:53:56.140 Cannot find device "nvmf_tgt_br" 00:53:56.140 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:53:56.141 Cannot find device "nvmf_tgt_br2" 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:53:56.141 Cannot find device "nvmf_init_br" 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:53:56.141 Cannot find device "nvmf_init_br2" 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:53:56.141 Cannot find device "nvmf_tgt_br" 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:53:56.141 Cannot find device "nvmf_tgt_br2" 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:53:56.141 Cannot find device "nvmf_br" 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:53:56.141 Cannot find device "nvmf_init_if" 00:53:56.141 05:52:50 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:53:56.141 Cannot find device "nvmf_init_if2" 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:53:56.141 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:53:56.141 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:53:56.141 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:53:56.400 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:53:56.400 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:53:56.400 00:53:56.400 --- 10.0.0.3 ping statistics --- 00:53:56.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:56.400 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:53:56.400 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:53:56.400 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:53:56.400 00:53:56.400 --- 10.0.0.4 ping statistics --- 00:53:56.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:56.400 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:53:56.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:53:56.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:53:56.400 00:53:56.400 --- 10.0.0.1 ping statistics --- 00:53:56.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:56.400 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:53:56.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:53:56.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:53:56.400 00:53:56.400 --- 10.0.0.2 ping statistics --- 00:53:56.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:53:56.400 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=64602 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 64602 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 64602 ']' 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:53:56.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
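Before launching the target, the trace above builds the virtual test topology: one veth pair per interface, the target-side ends moved into the nvmf_tgt_ns_spdk network namespace, everything bridged through nvmf_br, TCP port 4420 opened in iptables, and connectivity verified with ping in both directions. A minimal standalone sketch of that setup, assuming the same interface and namespace names shown in the log (the second init/tgt interface pair is omitted for brevity):

    # create the namespace and the veth pairs
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    # target-side interface lives inside the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # bring everything up and bridge the two peer ends together
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # allow NVMe/TCP traffic and confirm the path works
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3   # initiator host reaching the target namespace

With this in place, nvmf_tgt is started inside the namespace (via NVMF_TARGET_NS_CMD) so that the initiator-side tools connect to it over the bridged veth path rather than loopback.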
00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:53:56.400 05:52:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:53:56.400 [2024-12-09 05:52:50.893382] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:53:56.400 [2024-12-09 05:52:50.893466] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:53:56.660 [2024-12-09 05:52:51.046416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:53:56.660 [2024-12-09 05:52:51.085721] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:53:56.660 [2024-12-09 05:52:51.085796] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:53:56.660 [2024-12-09 05:52:51.085811] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:53:56.660 [2024-12-09 05:52:51.085821] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:53:56.660 [2024-12-09 05:52:51.085829] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:53:56.660 [2024-12-09 05:52:51.086719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:53:56.660 [2024-12-09 05:52:51.086733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:53:56.660 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:53:56.660 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:53:56.660 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:53:56.660 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:53:56.660 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:53:56.660 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:53:56.660 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:53:56.660 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:56.660 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:53:56.660 [2024-12-09 05:52:51.226495] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:56.660 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:56.660 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:53:56.660 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:56.660 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:53:56.660 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:56.660 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:53:56.660 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:56.660 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:53:56.660 [2024-12-09 05:52:51.242637] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:53:56.919 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:56.919 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:53:56.919 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:56.919 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:53:56.919 NULL1 00:53:56.919 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:56.919 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:53:56.919 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:56.919 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:53:56.919 Delay0 00:53:56.919 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:56.919 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:53:56.919 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:56.919 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:53:56.919 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:53:56.919 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=64639 00:53:56.919 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:53:56.919 05:52:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:53:56.919 [2024-12-09 05:52:51.447226] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
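At this point delete_subsystem.sh has provisioned the target over RPC: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.3:4420, and a null bdev wrapped in a delay bdev (Delay0) attached as its namespace; spdk_nvme_perf is then started against it so the nvmf_delete_subsystem call that follows lands while I/O is still in flight. A rough equivalent of those rpc_cmd invocations, assuming rpc_cmd is the test suite's wrapper around scripts/rpc.py, would be:

    # transport and subsystem, arguments as given in the trace
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # backing device: a null bdev behind an artificial-latency delay bdev
    scripts/rpc.py bdev_null_create NULL1 1000 512
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # ... launch spdk_nvme_perf against 10.0.0.3:4420, then pull the subsystem out from under it:
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The large delay-bdev latencies keep I/O queued long enough that the deletion reliably races with outstanding requests, which is what produces the burst of aborted completions in the output below.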
00:53:58.822 05:52:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:53:58.822 05:52:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:53:58.822 05:52:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Write completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Write completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Write completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Write completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Write completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 starting I/O failed: -6 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, 
sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Write completed with error (sct=0, sc=8) 00:53:59.080 Write completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Write completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Write completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Write completed with error (sct=0, sc=8) 00:53:59.080 Write completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Write completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 Read completed with error (sct=0, sc=8) 00:53:59.080 starting I/O failed: -6 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 starting I/O failed: -6 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Write completed with error (sct=0, sc=8) 00:53:59.081 starting I/O failed: -6 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 starting I/O failed: -6 00:53:59.081 Write completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 starting I/O failed: -6 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Write completed with error (sct=0, sc=8) 00:53:59.081 starting I/O failed: -6 00:53:59.081 Write completed with error (sct=0, sc=8) 00:53:59.081 
Read completed with error (sct=0, sc=8) 00:53:59.081 starting I/O failed: -6 00:53:59.081 Write completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 starting I/O failed: -6 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 [2024-12-09 05:52:53.478897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18987e0 is same with the state(6) to be set 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 starting I/O failed: -6 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Write completed with error (sct=0, sc=8) 00:53:59.081 Write completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 starting I/O failed: -6 00:53:59.081 Write completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 starting I/O failed: -6 00:53:59.081 Write completed with error (sct=0, sc=8) 00:53:59.081 Write completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 starting I/O failed: -6 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Write completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 starting I/O failed: -6 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 starting I/O failed: -6 00:53:59.081 Write completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Write completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 starting I/O failed: -6 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 starting I/O failed: -6 00:53:59.081 Write completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Write completed with error (sct=0, sc=8) 00:53:59.081 Write completed with error (sct=0, sc=8) 00:53:59.081 starting I/O failed: -6 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Write completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 [2024-12-09 05:52:53.480812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe3d4000c40 is same with the state(6) to be set 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Write completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, 
sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Write completed with error (sct=0, sc=8) 00:53:59.081 Write completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Write completed with error (sct=0, sc=8) 00:53:59.081 Write completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Write completed with error (sct=0, sc=8) 00:53:59.081 Write completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Write completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Write completed with error (sct=0, sc=8) 00:53:59.081 Write completed with error (sct=0, sc=8) 00:53:59.081 Write completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Write completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Write completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Write completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Read completed with error (sct=0, sc=8) 00:53:59.081 Write completed with error (sct=0, sc=8) 00:54:00.018 [2024-12-09 05:52:54.460837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188caa0 is same with the state(6) to be set 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Write completed with error (sct=0, sc=8) 00:54:00.018 Write completed with error (sct=0, sc=8) 00:54:00.018 Write completed with error (sct=0, sc=8) 00:54:00.018 Write completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Write completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Write completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Write completed with error (sct=0, sc=8) 00:54:00.018 Write completed with error (sct=0, sc=8) 00:54:00.018 Write completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Write completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 [2024-12-09 05:52:54.478889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe3d400d020 is same with the state(6) to be set 00:54:00.018 Read completed with error (sct=0, sc=8) 
00:54:00.018 Write completed with error (sct=0, sc=8) 00:54:00.018 Write completed with error (sct=0, sc=8) 00:54:00.018 Write completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Write completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Write completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Write completed with error (sct=0, sc=8) 00:54:00.018 Write completed with error (sct=0, sc=8) 00:54:00.018 Write completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Write completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Write completed with error (sct=0, sc=8) 00:54:00.018 Write completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Write completed with error (sct=0, sc=8) 00:54:00.018 Write completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Write completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Write completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.018 [2024-12-09 05:52:54.479121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe3d400d800 is same with the state(6) to be set 00:54:00.018 Read completed with error (sct=0, sc=8) 00:54:00.019 Write completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Write completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Write completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Write completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Write completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Write completed with error (sct=0, sc=8) 00:54:00.019 Write completed with error (sct=0, sc=8) 00:54:00.019 Read 
completed with error (sct=0, sc=8) 00:54:00.019 Write completed with error (sct=0, sc=8) 00:54:00.019 Write completed with error (sct=0, sc=8) 00:54:00.019 Write completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Write completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Write completed with error (sct=0, sc=8) 00:54:00.019 Write completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 [2024-12-09 05:52:54.481945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897a50 is same with the state(6) to be set 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Write completed with error (sct=0, sc=8) 00:54:00.019 Write completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Write completed with error (sct=0, sc=8) 00:54:00.019 Write completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Write completed with error (sct=0, sc=8) 00:54:00.019 Write completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Write completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Write completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Write completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Write completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Write completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Write completed with error (sct=0, sc=8) 00:54:00.019 Read completed with error (sct=0, sc=8) 00:54:00.019 Write completed with error (sct=0, sc=8) 00:54:00.019 [2024-12-09 05:52:54.482569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189aea0 is same with the state(6) to be set 00:54:00.019 Initializing NVMe Controllers 00:54:00.019 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:54:00.019 Controller IO queue size 128, less than required. 00:54:00.019 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:54:00.019 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:54:00.019 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:54:00.019 Initialization complete. Launching workers. 00:54:00.019 ======================================================== 00:54:00.019 Latency(us) 00:54:00.019 Device Information : IOPS MiB/s Average min max 00:54:00.019 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 186.32 0.09 902296.30 738.74 1009806.84 00:54:00.019 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 150.95 0.07 1057135.25 317.13 2001652.47 00:54:00.019 ======================================================== 00:54:00.019 Total : 337.26 0.16 971596.45 317.13 2001652.47 00:54:00.019 00:54:00.019 [2024-12-09 05:52:54.483327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x188caa0 (9): Bad file descriptor 00:54:00.019 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:54:00.019 05:52:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:00.019 05:52:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:54:00.019 05:52:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 64639 00:54:00.019 05:52:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:54:00.588 05:52:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:54:00.588 05:52:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 64639 00:54:00.588 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (64639) - No such process 00:54:00.588 05:52:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 64639 00:54:00.588 05:52:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:54:00.588 05:52:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 64639 00:54:00.588 05:52:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:54:00.588 05:52:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:54:00.588 05:52:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:54:00.588 05:52:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:54:00.588 05:52:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 64639 00:54:00.588 05:52:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:54:00.588 05:52:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:54:00.588 05:52:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:54:00.588 05:52:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:54:00.588 05:52:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:54:00.588 05:52:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:00.588 05:52:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:54:00.588 05:52:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:00.588 05:52:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:54:00.588 05:52:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:00.588 05:52:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:54:00.588 [2024-12-09 05:52:55.009987] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:54:00.588 05:52:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:00.588 05:52:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:54:00.588 05:52:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:00.588 05:52:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:54:00.588 05:52:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:00.588 05:52:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=64685 00:54:00.588 05:52:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:54:00.588 05:52:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:54:00.588 05:52:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64685 00:54:00.588 05:52:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:54:00.881 [2024-12-09 05:52:55.189233] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
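The second half of the test recreates the subsystem, re-adds the listener and the Delay0 namespace, and starts a 3-second perf run (pid 64685); it then polls that process with kill -0 every half second, giving up after roughly 20 iterations, so the later "No such process" message is the expected sign that perf has finished. Reconstructed from the kill/sleep trace above (the timeout handling shown here is an assumption, not the script's exact wording), the polling loop looks roughly like:

    delay=0
    while kill -0 "$perf_pid" 2> /dev/null; do
        sleep 0.5
        # assumed failure handling: abort if perf lingers far beyond its 3-second runtime
        if (( delay++ > 20 )); then
            echo "perf process $perf_pid did not exit in time" >&2
            exit 1
        fi
    done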
00:54:01.139 05:52:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:54:01.139 05:52:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64685 00:54:01.139 05:52:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:54:01.706 05:52:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:54:01.706 05:52:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64685 00:54:01.706 05:52:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:54:01.964 05:52:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:54:01.964 05:52:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64685 00:54:01.964 05:52:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:54:02.541 05:52:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:54:02.541 05:52:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64685 00:54:02.541 05:52:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:54:03.181 05:52:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:54:03.181 05:52:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64685 00:54:03.181 05:52:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:54:03.764 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:54:03.764 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64685 00:54:03.764 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:54:03.764 Initializing NVMe Controllers 00:54:03.764 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:54:03.764 Controller IO queue size 128, less than required. 00:54:03.764 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:54:03.764 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:54:03.764 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:54:03.764 Initialization complete. Launching workers. 
00:54:03.764 ======================================================== 00:54:03.764 Latency(us) 00:54:03.764 Device Information : IOPS MiB/s Average min max 00:54:03.764 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002487.20 1000230.92 1041829.30 00:54:03.764 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004089.53 1000165.35 1042427.19 00:54:03.764 ======================================================== 00:54:03.764 Total : 256.00 0.12 1003288.36 1000165.35 1042427.19 00:54:03.764 00:54:04.024 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:54:04.024 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64685 00:54:04.024 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (64685) - No such process 00:54:04.024 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 64685 00:54:04.024 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:54:04.024 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:54:04.024 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:54:04.024 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:54:04.024 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:54:04.024 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:54:04.024 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:54:04.024 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:54:04.024 rmmod nvme_tcp 00:54:04.282 rmmod nvme_fabrics 00:54:04.282 rmmod nvme_keyring 00:54:04.282 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:54:04.282 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:54:04.282 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:54:04.282 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 64602 ']' 00:54:04.282 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 64602 00:54:04.282 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 64602 ']' 00:54:04.282 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 64602 00:54:04.282 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:54:04.282 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:04.282 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64602 00:54:04.282 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:54:04.282 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:54:04.282 killing 
process with pid 64602 00:54:04.282 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64602' 00:54:04.282 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 64602 00:54:04.282 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 64602 00:54:04.282 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:54:04.282 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:54:04.282 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:54:04.282 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:54:04.282 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:54:04.282 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:54:04.282 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:54:04.282 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:54:04.282 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:54:04.282 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:54:04.282 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:54:04.282 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:54:04.282 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:54:04.541 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:54:04.541 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:54:04.541 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:54:04.541 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:54:04.541 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:54:04.541 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:54:04.541 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:54:04.541 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:54:04.541 05:52:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:54:04.541 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:54:04.541 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:04.541 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:54:04.541 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:04.541 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:54:04.541 00:54:04.541 real 0m8.768s 00:54:04.541 user 0m27.281s 00:54:04.541 sys 0m1.463s 00:54:04.541 ************************************ 00:54:04.541 END TEST nvmf_delete_subsystem 00:54:04.541 ************************************ 00:54:04.541 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:54:04.541 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:54:04.541 05:52:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:54:04.541 05:52:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:54:04.541 05:52:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:54:04.541 05:52:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:54:04.541 ************************************ 00:54:04.541 START TEST nvmf_host_management 00:54:04.541 ************************************ 00:54:04.541 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:54:04.801 * Looking for test storage... 00:54:04.801 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:54:04.801 
05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:54:04.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:04.801 --rc genhtml_branch_coverage=1 00:54:04.801 --rc genhtml_function_coverage=1 00:54:04.801 --rc genhtml_legend=1 00:54:04.801 --rc geninfo_all_blocks=1 00:54:04.801 --rc geninfo_unexecuted_blocks=1 00:54:04.801 00:54:04.801 ' 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:54:04.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:04.801 --rc genhtml_branch_coverage=1 00:54:04.801 --rc genhtml_function_coverage=1 00:54:04.801 --rc genhtml_legend=1 00:54:04.801 --rc geninfo_all_blocks=1 00:54:04.801 --rc geninfo_unexecuted_blocks=1 00:54:04.801 00:54:04.801 ' 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:54:04.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:04.801 --rc genhtml_branch_coverage=1 00:54:04.801 --rc genhtml_function_coverage=1 00:54:04.801 --rc genhtml_legend=1 00:54:04.801 --rc geninfo_all_blocks=1 00:54:04.801 --rc geninfo_unexecuted_blocks=1 00:54:04.801 00:54:04.801 ' 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:54:04.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:04.801 --rc genhtml_branch_coverage=1 00:54:04.801 --rc 
genhtml_function_coverage=1 00:54:04.801 --rc genhtml_legend=1 00:54:04.801 --rc geninfo_all_blocks=1 00:54:04.801 --rc geninfo_unexecuted_blocks=1 00:54:04.801 00:54:04.801 ' 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:54:04.801 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:54:04.802 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:54:04.802 05:52:59 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:54:04.802 Cannot find device "nvmf_init_br" 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:54:04.802 Cannot find device "nvmf_init_br2" 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:54:04.802 Cannot find device "nvmf_tgt_br" 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:54:04.802 Cannot find device "nvmf_tgt_br2" 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:54:04.802 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:54:05.061 Cannot find device "nvmf_init_br" 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:54:05.061 Cannot find device "nvmf_init_br2" 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:54:05.061 Cannot find device "nvmf_tgt_br" 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:54:05.061 Cannot find device "nvmf_tgt_br2" 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:54:05.061 Cannot find device "nvmf_br" 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:54:05.061 05:52:59 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:54:05.061 Cannot find device "nvmf_init_if" 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:54:05.061 Cannot find device "nvmf_init_if2" 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:54:05.061 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:54:05.061 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:54:05.061 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:54:05.321 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:54:05.321 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:54:05.321 00:54:05.321 --- 10.0.0.3 ping statistics --- 00:54:05.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:05.321 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:54:05.321 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:54:05.321 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:54:05.321 00:54:05.321 --- 10.0.0.4 ping statistics --- 00:54:05.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:05.321 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:54:05.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:54:05.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:54:05.321 00:54:05.321 --- 10.0.0.1 ping statistics --- 00:54:05.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:05.321 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:54:05.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:54:05.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:54:05.321 00:54:05.321 --- 10.0.0.2 ping statistics --- 00:54:05.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:05.321 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=64972 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 64972 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # 
'[' -z 64972 ']' 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:05.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:05.321 05:52:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:54:05.321 [2024-12-09 05:52:59.792420] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:54:05.322 [2024-12-09 05:52:59.792512] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:54:05.581 [2024-12-09 05:52:59.940344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:54:05.581 [2024-12-09 05:52:59.969774] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:54:05.581 [2024-12-09 05:52:59.970116] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:54:05.581 [2024-12-09 05:52:59.970254] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:54:05.581 [2024-12-09 05:52:59.970306] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:54:05.581 [2024-12-09 05:52:59.970401] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
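For reference, the target being brought up here can be started by hand with the same command the wrapper functions assemble above (nvmf/common.sh@508); the backgrounding and pid capture are assumptions based on the nvmfpid assignment in the trace:

    # Start the SPDK NVMe-oF target inside the test's network namespace.
    # -m 0x1E runs reactors on cores 1-4 (matching the four "Reactor started
    # on core" notices that follow), -e 0xFFFF sets the tracepoint group mask,
    # and -i 0 selects shared-memory id 0.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # waitforlisten then blocks until the app answers RPCs on /var/tmp/spdk.sock
    # (the "Waiting for process to start up..." message above).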
00:54:05.581 [2024-12-09 05:52:59.971297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:54:05.581 [2024-12-09 05:52:59.971452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:54:05.581 [2024-12-09 05:52:59.971618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:54:05.581 [2024-12-09 05:52:59.971622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:54:05.581 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:05.581 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:54:05.581 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:54:05.581 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:54:05.581 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:54:05.581 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:54:05.581 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:54:05.581 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:05.581 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:54:05.581 [2024-12-09 05:53:00.121378] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:54:05.581 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:05.581 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:54:05.581 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:54:05.581 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:54:05.581 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:54:05.581 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:54:05.581 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:54:05.581 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:05.581 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:54:05.840 Malloc0 00:54:05.840 [2024-12-09 05:53:00.188149] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:54:05.840 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:05.840 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:54:05.840 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:54:05.840 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:54:05.840 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=65025 00:54:05.840 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65025 /var/tmp/bdevperf.sock 00:54:05.840 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:54:05.841 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 65025 ']' 00:54:05.841 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:54:05.841 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:54:05.841 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:05.841 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:54:05.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:54:05.841 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:54:05.841 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:54:05.841 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:05.841 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:54:05.841 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:54:05.841 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:54:05.841 { 00:54:05.841 "params": { 00:54:05.841 "name": "Nvme$subsystem", 00:54:05.841 "trtype": "$TEST_TRANSPORT", 00:54:05.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:54:05.841 "adrfam": "ipv4", 00:54:05.841 "trsvcid": "$NVMF_PORT", 00:54:05.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:54:05.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:54:05.841 "hdgst": ${hdgst:-false}, 00:54:05.841 "ddgst": ${ddgst:-false} 00:54:05.841 }, 00:54:05.841 "method": "bdev_nvme_attach_controller" 00:54:05.841 } 00:54:05.841 EOF 00:54:05.841 )") 00:54:05.841 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:54:05.841 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:54:05.841 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:54:05.841 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:54:05.841 "params": { 00:54:05.841 "name": "Nvme0", 00:54:05.841 "trtype": "tcp", 00:54:05.841 "traddr": "10.0.0.3", 00:54:05.841 "adrfam": "ipv4", 00:54:05.841 "trsvcid": "4420", 00:54:05.841 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:54:05.841 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:54:05.841 "hdgst": false, 00:54:05.841 "ddgst": false 00:54:05.841 }, 00:54:05.841 "method": "bdev_nvme_attach_controller" 00:54:05.841 }' 00:54:05.841 [2024-12-09 05:53:00.301175] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:54:05.841 [2024-12-09 05:53:00.301284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65025 ] 00:54:06.100 [2024-12-09 05:53:00.454069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:06.100 [2024-12-09 05:53:00.492300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:54:06.100 Running I/O for 10 seconds... 00:54:06.359 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:06.359 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:54:06.359 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:54:06.359 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:06.359 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:54:06.359 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:06.359 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:54:06.359 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:54:06.359 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:54:06.359 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:54:06.359 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:54:06.359 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:54:06.359 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:54:06.359 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:54:06.359 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:54:06.359 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:54:06.359 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:06.359 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:54:06.359 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:06.359 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:54:06.359 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:54:06.359 05:53:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:54:06.619 05:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 
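The '(( i = 10 ))' / '(( i-- ))' entries above and the second pass just below are iterations of the waitforio helper (host_management.sh, roughly lines 45-64), which polls bdevperf's iostat until the attached Nvme0n1 bdev has completed at least 100 reads. A hedged reconstruction from the xtrace; names not visible in the trace are assumptions and the real helper may differ:

    # waitforio <rpc socket> <bdev name> - poll until the bdev shows >= 100 reads
    waitforio() {
        local sock=$1 bdev=$2          # /var/tmp/bdevperf.sock and Nvme0n1 in this run
        local ret=1 i read_io_count
        [ -z "$sock" ] && return 1     # line 45: argument checks (simplified)
        [ -z "$bdev" ] && return 1     # line 49
        for ((i = 10; i != 0; i--)); do            # line 54: at most 10 polls
            # line 55: read the bdev's num_read_ops via the suite's rpc_cmd wrapper
            read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" |
                jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then  # line 58
                ret=0                              # line 59
                break                              # line 60
            fi
            sleep 0.25                             # line 62
        done
        return $ret                                # line 64
    }

In this run the first poll reads 67 completed operations and the second reads 579, so waitforio returns 0 and the test goes on to remove the host from the subsystem (nvmf_subsystem_remove_host below).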
00:54:06.619 05:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:54:06.619 05:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:54:06.619 05:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:54:06.619 05:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:06.619 05:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:54:06.619 05:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:06.620 05:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:54:06.620 05:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:54:06.620 05:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:54:06.620 05:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:54:06.620 05:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:54:06.620 05:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:54:06.620 05:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:06.620 05:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:54:06.620 [2024-12-09 05:53:01.088945] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200be20 is same with the state(6) to be set 00:54:06.620 [2024-12-09 05:53:01.089023] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200be20 is same with the state(6) to be set 00:54:06.620 [2024-12-09 05:53:01.089043] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200be20 is same with the state(6) to be set 00:54:06.620 05:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:06.620 05:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:54:06.620 05:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:54:06.620 05:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:54:06.620 [2024-12-09 05:53:01.095680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:54:06.620 [2024-12-09 05:53:01.095729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.620 [2024-12-09 05:53:01.095743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:54:06.620 [2024-12-09 05:53:01.095753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.620 [2024-12-09 
05:53:01.095763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:54:06.620 [2024-12-09 05:53:01.095773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.620 [2024-12-09 05:53:01.095783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:54:06.620 [2024-12-09 05:53:01.095792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.620 [2024-12-09 05:53:01.095801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17bb130 is same with the state(6) to be set 00:54:06.620 [2024-12-09 05:53:01.101161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.620 [2024-12-09 05:53:01.101193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.620 [2024-12-09 05:53:01.101228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.620 [2024-12-09 05:53:01.101239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.620 [2024-12-09 05:53:01.101250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.620 [2024-12-09 05:53:01.101258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.620 [2024-12-09 05:53:01.101270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.620 [2024-12-09 05:53:01.101278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.620 [2024-12-09 05:53:01.101289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.620 [2024-12-09 05:53:01.101298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.620 [2024-12-09 05:53:01.101308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.620 [2024-12-09 05:53:01.101317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.620 [2024-12-09 05:53:01.101327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.620 [2024-12-09 05:53:01.101336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.620 [2024-12-09 05:53:01.101346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.620 [2024-12-09 05:53:01.101355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.620 [2024-12-09 05:53:01.101380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.620 [2024-12-09 05:53:01.101389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.620 [2024-12-09 05:53:01.101411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.620 [2024-12-09 05:53:01.101420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.620 [2024-12-09 05:53:01.101431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.620 [2024-12-09 05:53:01.101439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.620 [2024-12-09 05:53:01.101450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.620 [2024-12-09 05:53:01.101459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.620 [2024-12-09 05:53:01.101469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.620 [2024-12-09 05:53:01.101483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.620 [2024-12-09 05:53:01.101494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.620 [2024-12-09 05:53:01.101503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.620 [2024-12-09 05:53:01.101514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.620 [2024-12-09 05:53:01.101522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.620 [2024-12-09 05:53:01.101532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.620 [2024-12-09 05:53:01.101541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.620 [2024-12-09 05:53:01.101552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.620 [2024-12-09 05:53:01.101560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.620 [2024-12-09 05:53:01.101571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.620 [2024-12-09 05:53:01.101580] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.620 [2024-12-09 05:53:01.101590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.620 [2024-12-09 05:53:01.101599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.620 [2024-12-09 05:53:01.101610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.620 [2024-12-09 05:53:01.101618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.620 [2024-12-09 05:53:01.101628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.620 05:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:54:06.620 [2024-12-09 05:53:01.101637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.620 [2024-12-09 05:53:01.101648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.620 [2024-12-09 05:53:01.101672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.620 [2024-12-09 05:53:01.101704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.620 [2024-12-09 05:53:01.101716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.620 [2024-12-09 05:53:01.101727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.620 [2024-12-09 05:53:01.101736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.620 [2024-12-09 05:53:01.101747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.620 [2024-12-09 05:53:01.101756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.620 [2024-12-09 05:53:01.101767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.620 [2024-12-09 05:53:01.101775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.620 [2024-12-09 05:53:01.101786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.620 [2024-12-09 05:53:01.101795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.620 [2024-12-09 05:53:01.101806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 
lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.620 [2024-12-09 05:53:01.101815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.101825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.101837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.101848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.101857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.101868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.101877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.101888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.101897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.101908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.101916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 05:53:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:54:06.621 [2024-12-09 05:53:01.101928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.101936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.101947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.101957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.101968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.101977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.101988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.101997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 
05:53:01.102008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.102017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.102028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.102051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.102062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.102071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.102081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.102089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.102100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.102108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.102118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.102127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.102137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.102146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.102156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.102166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.102177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.102201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.102221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.102230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 
05:53:01.102240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.102250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.102260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.102269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.102281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.102290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.102301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.102309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.102320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.102329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.102340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.102348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.102359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.102368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.102379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.102388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.102398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.102407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.102418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.102426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 
05:53:01.102437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.102446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.102457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.102466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.102476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.102485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.102496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.102506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.102518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.102526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.102537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.102546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.102557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:54:06.621 [2024-12-09 05:53:01.102566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:54:06.621 [2024-12-09 05:53:01.103757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:54:06.621 task offset: 90112 on job bdev=Nvme0n1 fails 00:54:06.621 00:54:06.621 Latency(us) 00:54:06.621 [2024-12-09T05:53:01.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:54:06.621 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:54:06.621 Job: Nvme0n1 ended in about 0.46 seconds with error 00:54:06.621 Verification LBA range: start 0x0 length 0x400 00:54:06.621 Nvme0n1 : 0.46 1527.52 95.47 138.87 0.00 36929.42 1839.48 42657.98 00:54:06.621 [2024-12-09T05:53:01.208Z] =================================================================================================================== 00:54:06.622 [2024-12-09T05:53:01.208Z] Total : 1527.52 95.47 138.87 0.00 36929.42 1839.48 42657.98 00:54:06.622 [2024-12-09 05:53:01.105790] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:54:06.622 [2024-12-09 05:53:01.105822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: 
*ERROR*: Failed to flush tqpair=0x17bb130 (9): Bad file descriptor 00:54:06.622 [2024-12-09 05:53:01.110633] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:54:07.557 05:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65025 00:54:07.557 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65025) - No such process 00:54:07.557 05:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:54:07.557 05:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:54:07.557 05:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:54:07.557 05:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:54:07.557 05:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:54:07.557 05:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:54:07.557 05:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:54:07.557 05:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:54:07.557 { 00:54:07.557 "params": { 00:54:07.557 "name": "Nvme$subsystem", 00:54:07.557 "trtype": "$TEST_TRANSPORT", 00:54:07.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:54:07.557 "adrfam": "ipv4", 00:54:07.557 "trsvcid": "$NVMF_PORT", 00:54:07.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:54:07.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:54:07.557 "hdgst": ${hdgst:-false}, 00:54:07.557 "ddgst": ${ddgst:-false} 00:54:07.557 }, 00:54:07.557 "method": "bdev_nvme_attach_controller" 00:54:07.557 } 00:54:07.557 EOF 00:54:07.557 )") 00:54:07.557 05:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:54:07.557 05:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:54:07.557 05:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:54:07.557 05:53:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:54:07.557 "params": { 00:54:07.557 "name": "Nvme0", 00:54:07.557 "trtype": "tcp", 00:54:07.557 "traddr": "10.0.0.3", 00:54:07.557 "adrfam": "ipv4", 00:54:07.557 "trsvcid": "4420", 00:54:07.557 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:54:07.557 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:54:07.557 "hdgst": false, 00:54:07.557 "ddgst": false 00:54:07.557 }, 00:54:07.557 "method": "bdev_nvme_attach_controller" 00:54:07.557 }' 00:54:07.815 [2024-12-09 05:53:02.161127] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
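A rough standalone sketch of the bdevperf run assembled above: the attach-controller parameters and the command-line flags are the ones printed in the trace, while the /tmp file name and the standard SPDK "subsystems"/"config" JSON wrapper are assumptions filled in for readability.

# Sketch only: run bdevperf against the target outside the test harness.
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same queue depth, IO size, workload and runtime as the traced invocation.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1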
00:54:07.815 [2024-12-09 05:53:02.161207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65071 ] 00:54:07.815 [2024-12-09 05:53:02.303556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:07.815 [2024-12-09 05:53:02.333056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:54:08.073 Running I/O for 1 seconds... 00:54:09.009 1728.00 IOPS, 108.00 MiB/s 00:54:09.009 Latency(us) 00:54:09.009 [2024-12-09T05:53:03.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:54:09.009 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:54:09.009 Verification LBA range: start 0x0 length 0x400 00:54:09.009 Nvme0n1 : 1.00 1784.72 111.55 0.00 0.00 35179.78 6672.76 32410.53 00:54:09.009 [2024-12-09T05:53:03.595Z] =================================================================================================================== 00:54:09.009 [2024-12-09T05:53:03.595Z] Total : 1784.72 111.55 0.00 0.00 35179.78 6672.76 32410.53 00:54:09.268 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:54:09.268 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:54:09.268 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:54:09.268 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:54:09.268 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:54:09.268 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:54:09.268 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:54:09.268 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:54:09.268 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:54:09.268 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:54:09.268 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:54:09.268 rmmod nvme_tcp 00:54:09.268 rmmod nvme_fabrics 00:54:09.268 rmmod nvme_keyring 00:54:09.268 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:54:09.268 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:54:09.268 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:54:09.268 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 64972 ']' 00:54:09.268 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 64972 00:54:09.268 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 64972 ']' 00:54:09.268 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 64972 00:54:09.268 05:53:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:54:09.268 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:09.268 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64972 00:54:09.268 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:54:09.268 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:54:09.268 killing process with pid 64972 00:54:09.268 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64972' 00:54:09.268 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 64972 00:54:09.268 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 64972 00:54:09.527 [2024-12-09 05:53:03.881377] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:54:09.527 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:54:09.527 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:54:09.527 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:54:09.527 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:54:09.527 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:54:09.527 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:54:09.527 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:54:09.527 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:54:09.527 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:54:09.527 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:54:09.527 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:54:09.527 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:54:09.527 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:54:09.527 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:54:09.527 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:54:09.527 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:54:09.527 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:54:09.527 05:53:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:54:09.527 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:54:09.527 05:53:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:54:09.527 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:54:09.527 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:54:09.527 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:54:09.527 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:54:09.787 00:54:09.787 real 0m5.041s 00:54:09.787 user 0m18.065s 00:54:09.787 sys 0m1.247s 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:54:09.787 ************************************ 00:54:09.787 END TEST nvmf_host_management 00:54:09.787 ************************************ 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:54:09.787 ************************************ 00:54:09.787 START TEST nvmf_lvol 00:54:09.787 ************************************ 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:54:09.787 * Looking for test storage... 
00:54:09.787 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:54:09.787 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:54:10.047 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:54:10.047 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:54:10.047 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:54:10.047 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:54:10.047 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:54:10.047 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:54:10.047 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:54:10.047 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:54:10.047 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:54:10.047 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:54:10.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:10.047 --rc genhtml_branch_coverage=1 00:54:10.047 --rc genhtml_function_coverage=1 00:54:10.047 --rc genhtml_legend=1 00:54:10.047 --rc geninfo_all_blocks=1 00:54:10.047 --rc geninfo_unexecuted_blocks=1 00:54:10.047 00:54:10.047 ' 00:54:10.047 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:54:10.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:10.047 --rc genhtml_branch_coverage=1 00:54:10.047 --rc genhtml_function_coverage=1 00:54:10.047 --rc genhtml_legend=1 00:54:10.047 --rc geninfo_all_blocks=1 00:54:10.047 --rc geninfo_unexecuted_blocks=1 00:54:10.047 00:54:10.047 ' 00:54:10.047 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:54:10.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:10.047 --rc genhtml_branch_coverage=1 00:54:10.047 --rc genhtml_function_coverage=1 00:54:10.047 --rc genhtml_legend=1 00:54:10.047 --rc geninfo_all_blocks=1 00:54:10.047 --rc geninfo_unexecuted_blocks=1 00:54:10.047 00:54:10.047 ' 00:54:10.047 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:54:10.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:10.048 --rc genhtml_branch_coverage=1 00:54:10.048 --rc genhtml_function_coverage=1 00:54:10.048 --rc genhtml_legend=1 00:54:10.048 --rc geninfo_all_blocks=1 00:54:10.048 --rc geninfo_unexecuted_blocks=1 00:54:10.048 00:54:10.048 ' 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:54:10.048 05:53:04 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:54:10.048 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:54:10.048 
05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:54:10.048 Cannot find device "nvmf_init_br" 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:54:10.048 Cannot find device "nvmf_init_br2" 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:54:10.048 Cannot find device "nvmf_tgt_br" 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:54:10.048 Cannot find device "nvmf_tgt_br2" 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:54:10.048 Cannot find device "nvmf_init_br" 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:54:10.048 Cannot find device "nvmf_init_br2" 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:54:10.048 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:54:10.049 Cannot find device "nvmf_tgt_br" 00:54:10.049 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:54:10.049 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:54:10.049 Cannot find device "nvmf_tgt_br2" 00:54:10.049 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:54:10.049 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:54:10.049 Cannot find device "nvmf_br" 00:54:10.049 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:54:10.049 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:54:10.049 Cannot find device "nvmf_init_if" 00:54:10.049 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:54:10.049 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:54:10.049 Cannot find device "nvmf_init_if2" 00:54:10.049 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:54:10.049 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:54:10.049 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:54:10.049 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:54:10.049 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:54:10.049 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:54:10.049 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:54:10.049 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:54:10.049 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:54:10.049 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:54:10.049 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:54:10.049 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:54:10.049 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:54:10.049 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:54:10.049 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:54:10.049 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:54:10.049 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:54:10.049 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:54:10.049 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:54:10.309 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:54:10.309 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:54:10.309 00:54:10.309 --- 10.0.0.3 ping statistics --- 00:54:10.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:10.309 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:54:10.309 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:54:10.309 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:54:10.309 00:54:10.309 --- 10.0.0.4 ping statistics --- 00:54:10.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:10.309 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:54:10.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:54:10.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:54:10.309 00:54:10.309 --- 10.0.0.1 ping statistics --- 00:54:10.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:10.309 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:54:10.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:54:10.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:54:10.309 00:54:10.309 --- 10.0.0.2 ping statistics --- 00:54:10.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:10.309 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=65345 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 65345 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 65345 ']' 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:10.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:10.309 05:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:54:10.310 [2024-12-09 05:53:04.851320] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
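Condensed into one place, the veth/bridge test topology that nvmf_veth_init builds above looks roughly like this; only the first initiator/target pair is shown, and the interface names, addresses and iptables rule are taken from the trace (the second pair and the 10.0.0.2/10.0.0.4 addresses are set up the same way).

# Minimal sketch of the harness network: one veth pair for the initiator,
# one for the target (moved into a network namespace), joined by a bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# Open the NVMe/TCP port and verify reachability, as the trace does.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3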
00:54:10.310 [2024-12-09 05:53:04.851407] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:54:10.569 [2024-12-09 05:53:05.003486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:54:10.569 [2024-12-09 05:53:05.042332] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:54:10.569 [2024-12-09 05:53:05.042402] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:54:10.569 [2024-12-09 05:53:05.042417] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:54:10.569 [2024-12-09 05:53:05.042428] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:54:10.569 [2024-12-09 05:53:05.042436] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:54:10.569 [2024-12-09 05:53:05.043346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:54:10.569 [2024-12-09 05:53:05.043423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:54:10.569 [2024-12-09 05:53:05.043425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:54:10.569 05:53:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:10.569 05:53:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:54:10.569 05:53:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:54:10.569 05:53:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:54:10.569 05:53:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:54:10.848 05:53:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:54:10.848 05:53:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:54:11.105 [2024-12-09 05:53:05.479547] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:54:11.105 05:53:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:54:11.363 05:53:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:54:11.363 05:53:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:54:11.622 05:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:54:11.622 05:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:54:11.880 05:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:54:12.139 05:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=015157fb-9464-46f6-8b26-0c098a0fe589 00:54:12.139 05:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
015157fb-9464-46f6-8b26-0c098a0fe589 lvol 20 00:54:12.397 05:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f9b2e7e4-fa5b-48b2-b01d-693d4e15a109 00:54:12.397 05:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:54:12.656 05:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f9b2e7e4-fa5b-48b2-b01d-693d4e15a109 00:54:12.915 05:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:54:13.173 [2024-12-09 05:53:07.592085] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:54:13.173 05:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:54:13.431 05:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:54:13.431 05:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65479 00:54:13.431 05:53:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:54:14.368 05:53:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot f9b2e7e4-fa5b-48b2-b01d-693d4e15a109 MY_SNAPSHOT 00:54:14.934 05:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7101afca-06ed-4cbd-94b0-010eebad03c5 00:54:14.934 05:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize f9b2e7e4-fa5b-48b2-b01d-693d4e15a109 30 00:54:15.194 05:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 7101afca-06ed-4cbd-94b0-010eebad03c5 MY_CLONE 00:54:15.452 05:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=87eea504-2b24-4079-93e8-c63884a627f9 00:54:15.452 05:53:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 87eea504-2b24-4079-93e8-c63884a627f9 00:54:16.018 05:53:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65479 00:54:24.129 Initializing NVMe Controllers 00:54:24.129 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:54:24.129 Controller IO queue size 128, less than required. 00:54:24.129 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:54:24.129 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:54:24.129 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:54:24.129 Initialization complete. Launching workers. 
00:54:24.129 ======================================================== 00:54:24.129 Latency(us) 00:54:24.129 Device Information : IOPS MiB/s Average min max 00:54:24.129 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11410.50 44.57 11223.05 2057.82 46126.28 00:54:24.129 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11440.30 44.69 11191.37 1984.21 113225.30 00:54:24.129 ======================================================== 00:54:24.129 Total : 22850.80 89.26 11207.19 1984.21 113225.30 00:54:24.129 00:54:24.129 05:53:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:54:24.129 05:53:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f9b2e7e4-fa5b-48b2-b01d-693d4e15a109 00:54:24.129 05:53:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 015157fb-9464-46f6-8b26-0c098a0fe589 00:54:24.388 05:53:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:54:24.388 05:53:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:54:24.388 05:53:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:54:24.388 05:53:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:54:24.388 05:53:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:54:24.645 05:53:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:54:24.645 05:53:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:54:24.645 05:53:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:54:24.645 05:53:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:54:24.645 rmmod nvme_tcp 00:54:24.645 rmmod nvme_fabrics 00:54:24.645 rmmod nvme_keyring 00:54:24.645 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:54:24.645 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:54:24.645 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:54:24.645 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 65345 ']' 00:54:24.645 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 65345 00:54:24.645 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 65345 ']' 00:54:24.645 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 65345 00:54:24.645 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:54:24.645 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:24.645 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65345 00:54:24.645 killing process with pid 65345 00:54:24.645 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:54:24.645 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:54:24.645 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 65345' 00:54:24.645 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 65345 00:54:24.645 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 65345 00:54:24.646 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:54:24.646 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:54:24.646 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:54:24.646 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:54:24.646 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:54:24.646 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:54:24.646 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:54:24.646 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:54:24.646 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:54:24.646 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:54:24.903 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:54:24.903 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:54:24.903 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:54:24.903 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:54:24.903 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:54:24.903 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:54:24.903 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:54:24.903 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:54:24.903 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:54:24.903 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:54:24.903 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:54:24.903 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:54:24.903 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:54:24.903 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:24.903 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:54:24.903 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:54:25.161 ************************************ 00:54:25.161 END TEST nvmf_lvol 00:54:25.161 ************************************ 00:54:25.161 00:54:25.161 real 0m15.299s 00:54:25.161 user 
1m3.761s 00:54:25.161 sys 0m3.710s 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:54:25.161 ************************************ 00:54:25.161 START TEST nvmf_lvs_grow 00:54:25.161 ************************************ 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:54:25.161 * Looking for test storage... 00:54:25.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:54:25.161 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:54:25.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:25.162 --rc genhtml_branch_coverage=1 00:54:25.162 --rc genhtml_function_coverage=1 00:54:25.162 --rc genhtml_legend=1 00:54:25.162 --rc geninfo_all_blocks=1 00:54:25.162 --rc geninfo_unexecuted_blocks=1 00:54:25.162 00:54:25.162 ' 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:54:25.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:25.162 --rc genhtml_branch_coverage=1 00:54:25.162 --rc genhtml_function_coverage=1 00:54:25.162 --rc genhtml_legend=1 00:54:25.162 --rc geninfo_all_blocks=1 00:54:25.162 --rc geninfo_unexecuted_blocks=1 00:54:25.162 00:54:25.162 ' 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:54:25.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:25.162 --rc genhtml_branch_coverage=1 00:54:25.162 --rc genhtml_function_coverage=1 00:54:25.162 --rc genhtml_legend=1 00:54:25.162 --rc geninfo_all_blocks=1 00:54:25.162 --rc geninfo_unexecuted_blocks=1 00:54:25.162 00:54:25.162 ' 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:54:25.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:54:25.162 --rc genhtml_branch_coverage=1 00:54:25.162 --rc genhtml_function_coverage=1 00:54:25.162 --rc genhtml_legend=1 00:54:25.162 --rc geninfo_all_blocks=1 00:54:25.162 --rc geninfo_unexecuted_blocks=1 00:54:25.162 00:54:25.162 ' 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:54:25.162 05:53:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:54:25.162 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:54:25.162 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
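As a rough sketch of the control paths the lvs_grow test drives (condensed from RPC commands that appear further down in this trace; the socket paths are the two the script has just set, nothing here is a verbatim excerpt of the log):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bdevperf_rpc_sock=/var/tmp/bdevperf.sock
  # target-side setup goes to the default RPC socket (/var/tmp/spdk.sock)
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  # the initiator side is driven through bdevperf's own socket
  $rpc_py -s $bdevperf_rpc_sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0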
00:54:25.419 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:54:25.419 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:54:25.419 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:54:25.419 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:54:25.419 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:54:25.419 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:54:25.419 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:25.419 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:54:25.419 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:25.419 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:54:25.419 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:54:25.419 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:54:25.419 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:54:25.419 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:54:25.419 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:54:25.419 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:54:25.419 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:54:25.419 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:54:25.419 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:54:25.419 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:54:25.420 Cannot find device "nvmf_init_br" 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:54:25.420 Cannot find device "nvmf_init_br2" 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:54:25.420 Cannot find device "nvmf_tgt_br" 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:54:25.420 Cannot find device "nvmf_tgt_br2" 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:54:25.420 Cannot find device "nvmf_init_br" 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:54:25.420 Cannot find device "nvmf_init_br2" 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:54:25.420 Cannot find device "nvmf_tgt_br" 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:54:25.420 Cannot find device "nvmf_tgt_br2" 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:54:25.420 Cannot find device "nvmf_br" 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:54:25.420 Cannot find device "nvmf_init_if" 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:54:25.420 Cannot find device "nvmf_init_if2" 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:54:25.420 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:54:25.420 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:54:25.420 05:53:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
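In rough outline, the nvmf_veth_init steps the trace has just replayed build the following topology (interface names and addresses as shown above; condensed sketch, not a verbatim excerpt of the log):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # host-side initiator pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target-side pair, moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                      # bridge ties both ends together
  ip link set nvmf_tgt_br  master nvmf_br

The second initiator/target pair (nvmf_init_if2 at 10.0.0.2, nvmf_tgt_if2 at 10.0.0.4) is set up the same way, and the iptables ACCEPT rules plus the four pings that follow verify the data path before the target application is started.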
00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:54:25.678 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:54:25.678 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:54:25.678 00:54:25.678 --- 10.0.0.3 ping statistics --- 00:54:25.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:25.678 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:54:25.678 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:54:25.678 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:54:25.678 00:54:25.678 --- 10.0.0.4 ping statistics --- 00:54:25.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:25.678 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:54:25.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:54:25.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:54:25.678 00:54:25.678 --- 10.0.0.1 ping statistics --- 00:54:25.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:25.678 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:54:25.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:54:25.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:54:25.678 00:54:25.678 --- 10.0.0.2 ping statistics --- 00:54:25.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:25.678 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=65903 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 65903 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 65903 ']' 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:25.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:25.678 05:53:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:54:25.678 [2024-12-09 05:53:20.168157] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:54:25.678 [2024-12-09 05:53:20.168258] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:54:25.936 [2024-12-09 05:53:20.310730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:25.936 [2024-12-09 05:53:20.338694] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:54:25.936 [2024-12-09 05:53:20.338736] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:54:25.936 [2024-12-09 05:53:20.338746] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:54:25.936 [2024-12-09 05:53:20.338755] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:54:25.936 [2024-12-09 05:53:20.338761] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:54:25.936 [2024-12-09 05:53:20.339038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:54:26.871 05:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:26.871 05:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:54:26.871 05:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:54:26.871 05:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:54:26.871 05:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:54:26.872 05:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:54:26.872 05:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:54:26.872 [2024-12-09 05:53:21.400464] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:54:26.872 05:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:54:26.872 05:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:54:26.872 05:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:54:26.872 05:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:54:26.872 ************************************ 00:54:26.872 START TEST lvs_grow_clean 00:54:26.872 ************************************ 00:54:26.872 05:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:54:26.872 05:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:54:26.872 05:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:54:26.872 05:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:54:26.872 05:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:54:26.872 05:53:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:54:26.872 05:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:54:26.872 05:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:54:26.872 05:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:54:26.872 05:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:54:27.439 05:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:54:27.439 05:53:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:54:27.697 05:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=da570295-b08e-43ac-a367-1a04cc5e0da2 00:54:27.697 05:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da570295-b08e-43ac-a367-1a04cc5e0da2 00:54:27.697 05:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:54:27.956 05:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:54:27.956 05:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:54:27.956 05:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u da570295-b08e-43ac-a367-1a04cc5e0da2 lvol 150 00:54:28.215 05:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=80bf9e17-a9c5-41ee-8b78-8d3aaf55e0bc 00:54:28.215 05:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:54:28.215 05:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:54:28.474 [2024-12-09 05:53:22.817400] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:54:28.474 [2024-12-09 05:53:22.817488] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:54:28.474 true 00:54:28.474 05:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da570295-b08e-43ac-a367-1a04cc5e0da2 00:54:28.474 05:53:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:54:28.733 05:53:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:54:28.733 05:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:54:28.993 05:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 80bf9e17-a9c5-41ee-8b78-8d3aaf55e0bc 00:54:29.252 05:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:54:29.252 [2024-12-09 05:53:23.799397] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:54:29.252 05:53:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:54:29.820 05:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:54:29.820 05:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66070 00:54:29.820 05:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:54:29.820 05:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66070 /var/tmp/bdevperf.sock 00:54:29.820 05:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 66070 ']' 00:54:29.820 05:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:54:29.820 05:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:29.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:54:29.820 05:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:54:29.820 05:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:29.820 05:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:54:29.820 [2024-12-09 05:53:24.165571] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:54:29.820 [2024-12-09 05:53:24.165673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66070 ] 00:54:29.820 [2024-12-09 05:53:24.299782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:29.820 [2024-12-09 05:53:24.328182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:54:29.820 05:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:29.820 05:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:54:29.820 05:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:54:30.388 Nvme0n1 00:54:30.388 05:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:54:30.388 [ 00:54:30.388 { 00:54:30.388 "aliases": [ 00:54:30.388 "80bf9e17-a9c5-41ee-8b78-8d3aaf55e0bc" 00:54:30.388 ], 00:54:30.388 "assigned_rate_limits": { 00:54:30.388 "r_mbytes_per_sec": 0, 00:54:30.388 "rw_ios_per_sec": 0, 00:54:30.388 "rw_mbytes_per_sec": 0, 00:54:30.388 "w_mbytes_per_sec": 0 00:54:30.388 }, 00:54:30.388 "block_size": 4096, 00:54:30.388 "claimed": false, 00:54:30.388 "driver_specific": { 00:54:30.388 "mp_policy": "active_passive", 00:54:30.388 "nvme": [ 00:54:30.388 { 00:54:30.388 "ctrlr_data": { 00:54:30.388 "ana_reporting": false, 00:54:30.388 "cntlid": 1, 00:54:30.388 "firmware_revision": "25.01", 00:54:30.388 "model_number": "SPDK bdev Controller", 00:54:30.388 "multi_ctrlr": true, 00:54:30.388 "oacs": { 00:54:30.388 "firmware": 0, 00:54:30.388 "format": 0, 00:54:30.388 "ns_manage": 0, 00:54:30.388 "security": 0 00:54:30.388 }, 00:54:30.388 "serial_number": "SPDK0", 00:54:30.388 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:54:30.388 "vendor_id": "0x8086" 00:54:30.388 }, 00:54:30.388 "ns_data": { 00:54:30.388 "can_share": true, 00:54:30.388 "id": 1 00:54:30.388 }, 00:54:30.388 "trid": { 00:54:30.388 "adrfam": "IPv4", 00:54:30.388 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:54:30.388 "traddr": "10.0.0.3", 00:54:30.388 "trsvcid": "4420", 00:54:30.388 "trtype": "TCP" 00:54:30.388 }, 00:54:30.388 "vs": { 00:54:30.388 "nvme_version": "1.3" 00:54:30.388 } 00:54:30.388 } 00:54:30.388 ] 00:54:30.388 }, 00:54:30.388 "memory_domains": [ 00:54:30.388 { 00:54:30.388 "dma_device_id": "system", 00:54:30.388 "dma_device_type": 1 00:54:30.388 } 00:54:30.388 ], 00:54:30.388 "name": "Nvme0n1", 00:54:30.388 "num_blocks": 38912, 00:54:30.388 "numa_id": -1, 00:54:30.388 "product_name": "NVMe disk", 00:54:30.388 "supported_io_types": { 00:54:30.388 "abort": true, 00:54:30.388 "compare": true, 00:54:30.388 "compare_and_write": true, 00:54:30.388 "copy": true, 00:54:30.388 "flush": true, 00:54:30.388 "get_zone_info": false, 00:54:30.388 "nvme_admin": true, 00:54:30.388 "nvme_io": true, 00:54:30.388 "nvme_io_md": false, 00:54:30.388 "nvme_iov_md": false, 00:54:30.388 "read": true, 00:54:30.388 "reset": true, 00:54:30.388 "seek_data": false, 00:54:30.388 "seek_hole": false, 00:54:30.388 "unmap": true, 00:54:30.388 
"write": true, 00:54:30.388 "write_zeroes": true, 00:54:30.388 "zcopy": false, 00:54:30.388 "zone_append": false, 00:54:30.388 "zone_management": false 00:54:30.388 }, 00:54:30.388 "uuid": "80bf9e17-a9c5-41ee-8b78-8d3aaf55e0bc", 00:54:30.388 "zoned": false 00:54:30.388 } 00:54:30.388 ] 00:54:30.648 05:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66098 00:54:30.648 05:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:54:30.648 05:53:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:54:30.648 Running I/O for 10 seconds... 00:54:31.597 Latency(us) 00:54:31.597 [2024-12-09T05:53:26.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:54:31.597 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:54:31.597 Nvme0n1 : 1.00 7071.00 27.62 0.00 0.00 0.00 0.00 0.00 00:54:31.597 [2024-12-09T05:53:26.183Z] =================================================================================================================== 00:54:31.597 [2024-12-09T05:53:26.183Z] Total : 7071.00 27.62 0.00 0.00 0.00 0.00 0.00 00:54:31.597 00:54:32.589 05:53:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u da570295-b08e-43ac-a367-1a04cc5e0da2 00:54:32.589 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:54:32.589 Nvme0n1 : 2.00 7117.50 27.80 0.00 0.00 0.00 0.00 0.00 00:54:32.589 [2024-12-09T05:53:27.175Z] =================================================================================================================== 00:54:32.589 [2024-12-09T05:53:27.175Z] Total : 7117.50 27.80 0.00 0.00 0.00 0.00 0.00 00:54:32.589 00:54:32.847 true 00:54:32.847 05:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da570295-b08e-43ac-a367-1a04cc5e0da2 00:54:32.847 05:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:54:33.107 05:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:54:33.107 05:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:54:33.107 05:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 66098 00:54:33.676 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:54:33.676 Nvme0n1 : 3.00 7174.67 28.03 0.00 0.00 0.00 0.00 0.00 00:54:33.676 [2024-12-09T05:53:28.262Z] =================================================================================================================== 00:54:33.676 [2024-12-09T05:53:28.262Z] Total : 7174.67 28.03 0.00 0.00 0.00 0.00 0.00 00:54:33.676 00:54:34.614 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:54:34.614 Nvme0n1 : 4.00 7167.00 28.00 0.00 0.00 0.00 0.00 0.00 00:54:34.614 [2024-12-09T05:53:29.200Z] =================================================================================================================== 00:54:34.614 [2024-12-09T05:53:29.200Z] Total : 7167.00 28.00 0.00 0.00 0.00 
0.00 0.00 00:54:34.614 00:54:35.551 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:54:35.551 Nvme0n1 : 5.00 7066.60 27.60 0.00 0.00 0.00 0.00 0.00 00:54:35.551 [2024-12-09T05:53:30.137Z] =================================================================================================================== 00:54:35.551 [2024-12-09T05:53:30.137Z] Total : 7066.60 27.60 0.00 0.00 0.00 0.00 0.00 00:54:35.551 00:54:36.931 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:54:36.931 Nvme0n1 : 6.00 7055.83 27.56 0.00 0.00 0.00 0.00 0.00 00:54:36.931 [2024-12-09T05:53:31.517Z] =================================================================================================================== 00:54:36.931 [2024-12-09T05:53:31.517Z] Total : 7055.83 27.56 0.00 0.00 0.00 0.00 0.00 00:54:36.931 00:54:37.867 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:54:37.867 Nvme0n1 : 7.00 7036.57 27.49 0.00 0.00 0.00 0.00 0.00 00:54:37.867 [2024-12-09T05:53:32.453Z] =================================================================================================================== 00:54:37.867 [2024-12-09T05:53:32.453Z] Total : 7036.57 27.49 0.00 0.00 0.00 0.00 0.00 00:54:37.867 00:54:38.803 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:54:38.803 Nvme0n1 : 8.00 7014.25 27.40 0.00 0.00 0.00 0.00 0.00 00:54:38.803 [2024-12-09T05:53:33.389Z] =================================================================================================================== 00:54:38.803 [2024-12-09T05:53:33.389Z] Total : 7014.25 27.40 0.00 0.00 0.00 0.00 0.00 00:54:38.803 00:54:39.739 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:54:39.739 Nvme0n1 : 9.00 6998.78 27.34 0.00 0.00 0.00 0.00 0.00 00:54:39.739 [2024-12-09T05:53:34.325Z] =================================================================================================================== 00:54:39.739 [2024-12-09T05:53:34.325Z] Total : 6998.78 27.34 0.00 0.00 0.00 0.00 0.00 00:54:39.739 00:54:40.676 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:54:40.676 Nvme0n1 : 10.00 6986.30 27.29 0.00 0.00 0.00 0.00 0.00 00:54:40.676 [2024-12-09T05:53:35.262Z] =================================================================================================================== 00:54:40.676 [2024-12-09T05:53:35.262Z] Total : 6986.30 27.29 0.00 0.00 0.00 0.00 0.00 00:54:40.676 00:54:40.676 00:54:40.676 Latency(us) 00:54:40.676 [2024-12-09T05:53:35.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:54:40.676 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:54:40.676 Nvme0n1 : 10.01 6989.82 27.30 0.00 0.00 18306.49 5213.09 97231.59 00:54:40.676 [2024-12-09T05:53:35.262Z] =================================================================================================================== 00:54:40.676 [2024-12-09T05:53:35.262Z] Total : 6989.82 27.30 0.00 0.00 18306.49 5213.09 97231.59 00:54:40.676 { 00:54:40.676 "results": [ 00:54:40.676 { 00:54:40.676 "job": "Nvme0n1", 00:54:40.676 "core_mask": "0x2", 00:54:40.676 "workload": "randwrite", 00:54:40.676 "status": "finished", 00:54:40.676 "queue_depth": 128, 00:54:40.676 "io_size": 4096, 00:54:40.676 "runtime": 10.013283, 00:54:40.676 "iops": 6989.815428166766, 00:54:40.676 "mibps": 27.30396651627643, 00:54:40.676 "io_failed": 0, 00:54:40.676 "io_timeout": 0, 00:54:40.676 "avg_latency_us": 18306.49465840413, 
00:54:40.676 "min_latency_us": 5213.090909090909, 00:54:40.676 "max_latency_us": 97231.59272727273 00:54:40.676 } 00:54:40.676 ], 00:54:40.676 "core_count": 1 00:54:40.676 } 00:54:40.676 05:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66070 00:54:40.677 05:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 66070 ']' 00:54:40.677 05:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 66070 00:54:40.677 05:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:54:40.677 05:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:40.677 05:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66070 00:54:40.677 05:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:54:40.677 05:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:54:40.677 killing process with pid 66070 00:54:40.677 05:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66070' 00:54:40.677 05:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 66070 00:54:40.677 Received shutdown signal, test time was about 10.000000 seconds 00:54:40.677 00:54:40.677 Latency(us) 00:54:40.677 [2024-12-09T05:53:35.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:54:40.677 [2024-12-09T05:53:35.263Z] =================================================================================================================== 00:54:40.677 [2024-12-09T05:53:35.263Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:54:40.677 05:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 66070 00:54:40.936 05:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:54:40.936 05:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:54:41.503 05:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da570295-b08e-43ac-a367-1a04cc5e0da2 00:54:41.503 05:53:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:54:41.503 05:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:54:41.503 05:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:54:41.503 05:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:54:41.762 [2024-12-09 05:53:36.263451] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:54:41.762 
05:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da570295-b08e-43ac-a367-1a04cc5e0da2 00:54:41.762 05:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:54:41.762 05:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da570295-b08e-43ac-a367-1a04cc5e0da2 00:54:41.762 05:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:54:41.762 05:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:54:41.762 05:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:54:41.762 05:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:54:41.762 05:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:54:41.762 05:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:54:41.762 05:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:54:41.762 05:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:54:41.762 05:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da570295-b08e-43ac-a367-1a04cc5e0da2 00:54:42.020 2024/12/09 05:53:36 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:da570295-b08e-43ac-a367-1a04cc5e0da2], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:54:42.020 request: 00:54:42.020 { 00:54:42.020 "method": "bdev_lvol_get_lvstores", 00:54:42.020 "params": { 00:54:42.020 "uuid": "da570295-b08e-43ac-a367-1a04cc5e0da2" 00:54:42.020 } 00:54:42.020 } 00:54:42.020 Got JSON-RPC error response 00:54:42.020 GoRPCClient: error on JSON-RPC call 00:54:42.020 05:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:54:42.020 05:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:54:42.020 05:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:54:42.020 05:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:54:42.020 05:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:54:42.279 aio_bdev 00:54:42.279 05:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 80bf9e17-a9c5-41ee-8b78-8d3aaf55e0bc 00:54:42.279 05:53:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=80bf9e17-a9c5-41ee-8b78-8d3aaf55e0bc 00:54:42.279 05:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:54:42.279 05:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:54:42.279 05:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:54:42.279 05:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:54:42.279 05:53:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:54:42.538 05:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 80bf9e17-a9c5-41ee-8b78-8d3aaf55e0bc -t 2000 00:54:42.798 [ 00:54:42.798 { 00:54:42.798 "aliases": [ 00:54:42.798 "lvs/lvol" 00:54:42.798 ], 00:54:42.798 "assigned_rate_limits": { 00:54:42.798 "r_mbytes_per_sec": 0, 00:54:42.798 "rw_ios_per_sec": 0, 00:54:42.798 "rw_mbytes_per_sec": 0, 00:54:42.798 "w_mbytes_per_sec": 0 00:54:42.798 }, 00:54:42.798 "block_size": 4096, 00:54:42.798 "claimed": false, 00:54:42.798 "driver_specific": { 00:54:42.798 "lvol": { 00:54:42.798 "base_bdev": "aio_bdev", 00:54:42.798 "clone": false, 00:54:42.798 "esnap_clone": false, 00:54:42.798 "lvol_store_uuid": "da570295-b08e-43ac-a367-1a04cc5e0da2", 00:54:42.798 "num_allocated_clusters": 38, 00:54:42.798 "snapshot": false, 00:54:42.798 "thin_provision": false 00:54:42.798 } 00:54:42.798 }, 00:54:42.798 "name": "80bf9e17-a9c5-41ee-8b78-8d3aaf55e0bc", 00:54:42.798 "num_blocks": 38912, 00:54:42.798 "product_name": "Logical Volume", 00:54:42.798 "supported_io_types": { 00:54:42.798 "abort": false, 00:54:42.798 "compare": false, 00:54:42.798 "compare_and_write": false, 00:54:42.798 "copy": false, 00:54:42.798 "flush": false, 00:54:42.798 "get_zone_info": false, 00:54:42.798 "nvme_admin": false, 00:54:42.798 "nvme_io": false, 00:54:42.798 "nvme_io_md": false, 00:54:42.798 "nvme_iov_md": false, 00:54:42.798 "read": true, 00:54:42.798 "reset": true, 00:54:42.798 "seek_data": true, 00:54:42.798 "seek_hole": true, 00:54:42.798 "unmap": true, 00:54:42.798 "write": true, 00:54:42.798 "write_zeroes": true, 00:54:42.798 "zcopy": false, 00:54:42.798 "zone_append": false, 00:54:42.798 "zone_management": false 00:54:42.798 }, 00:54:42.798 "uuid": "80bf9e17-a9c5-41ee-8b78-8d3aaf55e0bc", 00:54:42.798 "zoned": false 00:54:42.798 } 00:54:42.798 ] 00:54:42.798 05:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:54:42.798 05:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:54:42.798 05:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u da570295-b08e-43ac-a367-1a04cc5e0da2 00:54:43.057 05:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:54:43.057 05:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
da570295-b08e-43ac-a367-1a04cc5e0da2 00:54:43.057 05:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:54:43.315 05:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:54:43.315 05:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 80bf9e17-a9c5-41ee-8b78-8d3aaf55e0bc 00:54:43.574 05:53:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u da570295-b08e-43ac-a367-1a04cc5e0da2 00:54:43.833 05:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:54:44.092 05:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:54:44.350 ************************************ 00:54:44.350 END TEST lvs_grow_clean 00:54:44.350 ************************************ 00:54:44.350 00:54:44.350 real 0m17.392s 00:54:44.350 user 0m16.760s 00:54:44.350 sys 0m2.012s 00:54:44.350 05:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:54:44.350 05:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:54:44.350 05:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:54:44.350 05:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:54:44.350 05:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:54:44.350 05:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:54:44.350 ************************************ 00:54:44.350 START TEST lvs_grow_dirty 00:54:44.350 ************************************ 00:54:44.350 05:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:54:44.350 05:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:54:44.350 05:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:54:44.350 05:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:54:44.350 05:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:54:44.350 05:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:54:44.351 05:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:54:44.351 05:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:54:44.351 05:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:54:44.351 
05:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:54:44.609 05:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:54:44.609 05:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:54:44.868 05:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=dacbed6d-c14c-41f9-beaa-43216d4170fd 00:54:44.868 05:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:54:44.868 05:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dacbed6d-c14c-41f9-beaa-43216d4170fd 00:54:45.128 05:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:54:45.128 05:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:54:45.387 05:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dacbed6d-c14c-41f9-beaa-43216d4170fd lvol 150 00:54:45.646 05:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f33835c8-89c8-4fc5-88a0-c0c2dddd2cfe 00:54:45.646 05:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:54:45.646 05:53:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:54:45.904 [2024-12-09 05:53:40.265592] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:54:45.904 [2024-12-09 05:53:40.265728] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:54:45.904 true 00:54:45.904 05:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dacbed6d-c14c-41f9-beaa-43216d4170fd 00:54:45.904 05:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:54:46.163 05:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:54:46.163 05:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:54:46.423 05:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f33835c8-89c8-4fc5-88a0-c0c2dddd2cfe 00:54:46.682 05:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:54:46.941 [2024-12-09 05:53:41.330173] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:54:46.941 05:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:54:47.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:54:47.201 05:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:54:47.201 05:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66500 00:54:47.201 05:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:54:47.201 05:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66500 /var/tmp/bdevperf.sock 00:54:47.201 05:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 66500 ']' 00:54:47.201 05:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:54:47.201 05:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:47.201 05:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:54:47.201 05:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:47.201 05:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:54:47.201 [2024-12-09 05:53:41.616979] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
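The grow itself comes next, while bdevperf keeps randwrite I/O in flight: the backing aio file was already enlarged and rescanned above, so the lvstore can be grown and its cluster count re-read. A condensed sketch of that check, using the store UUID and file path from this trace (paths relative to the SPDK repo root; with the 4 MiB cluster size used here the count goes from 49 to 99):

  # Enlarge the backing file and have the aio bdev pick up the new size (done earlier in the trace)
  truncate -s 400M test/nvmf/target/aio_bdev
  scripts/rpc.py bdev_aio_rescan aio_bdev
  # Grow the lvstore into the new space and verify the data-cluster count
  scripts/rpc.py bdev_lvol_grow_lvstore -u dacbed6d-c14c-41f9-beaa-43216d4170fd
  scripts/rpc.py bdev_lvol_get_lvstores -u dacbed6d-c14c-41f9-beaa-43216d4170fd | jq -r '.[0].total_data_clusters'   # expect 99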
00:54:47.201 [2024-12-09 05:53:41.617076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66500 ] 00:54:47.201 [2024-12-09 05:53:41.757651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:47.461 [2024-12-09 05:53:41.788371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:54:47.461 05:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:54:47.461 05:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:54:47.461 05:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:54:47.720 Nvme0n1 00:54:47.720 05:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:54:47.979 [ 00:54:47.979 { 00:54:47.979 "aliases": [ 00:54:47.980 "f33835c8-89c8-4fc5-88a0-c0c2dddd2cfe" 00:54:47.980 ], 00:54:47.980 "assigned_rate_limits": { 00:54:47.980 "r_mbytes_per_sec": 0, 00:54:47.980 "rw_ios_per_sec": 0, 00:54:47.980 "rw_mbytes_per_sec": 0, 00:54:47.980 "w_mbytes_per_sec": 0 00:54:47.980 }, 00:54:47.980 "block_size": 4096, 00:54:47.980 "claimed": false, 00:54:47.980 "driver_specific": { 00:54:47.980 "mp_policy": "active_passive", 00:54:47.980 "nvme": [ 00:54:47.980 { 00:54:47.980 "ctrlr_data": { 00:54:47.980 "ana_reporting": false, 00:54:47.980 "cntlid": 1, 00:54:47.980 "firmware_revision": "25.01", 00:54:47.980 "model_number": "SPDK bdev Controller", 00:54:47.980 "multi_ctrlr": true, 00:54:47.980 "oacs": { 00:54:47.980 "firmware": 0, 00:54:47.980 "format": 0, 00:54:47.980 "ns_manage": 0, 00:54:47.980 "security": 0 00:54:47.980 }, 00:54:47.980 "serial_number": "SPDK0", 00:54:47.980 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:54:47.980 "vendor_id": "0x8086" 00:54:47.980 }, 00:54:47.980 "ns_data": { 00:54:47.980 "can_share": true, 00:54:47.980 "id": 1 00:54:47.980 }, 00:54:47.980 "trid": { 00:54:47.980 "adrfam": "IPv4", 00:54:47.980 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:54:47.980 "traddr": "10.0.0.3", 00:54:47.980 "trsvcid": "4420", 00:54:47.980 "trtype": "TCP" 00:54:47.980 }, 00:54:47.980 "vs": { 00:54:47.980 "nvme_version": "1.3" 00:54:47.980 } 00:54:47.980 } 00:54:47.980 ] 00:54:47.980 }, 00:54:47.980 "memory_domains": [ 00:54:47.980 { 00:54:47.980 "dma_device_id": "system", 00:54:47.980 "dma_device_type": 1 00:54:47.980 } 00:54:47.980 ], 00:54:47.980 "name": "Nvme0n1", 00:54:47.980 "num_blocks": 38912, 00:54:47.980 "numa_id": -1, 00:54:47.980 "product_name": "NVMe disk", 00:54:47.980 "supported_io_types": { 00:54:47.980 "abort": true, 00:54:47.980 "compare": true, 00:54:47.980 "compare_and_write": true, 00:54:47.980 "copy": true, 00:54:47.980 "flush": true, 00:54:47.980 "get_zone_info": false, 00:54:47.980 "nvme_admin": true, 00:54:47.980 "nvme_io": true, 00:54:47.980 "nvme_io_md": false, 00:54:47.980 "nvme_iov_md": false, 00:54:47.980 "read": true, 00:54:47.980 "reset": true, 00:54:47.980 "seek_data": false, 00:54:47.980 "seek_hole": false, 00:54:47.980 "unmap": true, 00:54:47.980 
"write": true, 00:54:47.980 "write_zeroes": true, 00:54:47.980 "zcopy": false, 00:54:47.980 "zone_append": false, 00:54:47.980 "zone_management": false 00:54:47.980 }, 00:54:47.980 "uuid": "f33835c8-89c8-4fc5-88a0-c0c2dddd2cfe", 00:54:47.980 "zoned": false 00:54:47.980 } 00:54:47.980 ] 00:54:47.980 05:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:54:47.980 05:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66527 00:54:47.980 05:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:54:47.980 Running I/O for 10 seconds... 00:54:49.359 Latency(us) 00:54:49.359 [2024-12-09T05:53:43.945Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:54:49.359 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:54:49.359 Nvme0n1 : 1.00 7341.00 28.68 0.00 0.00 0.00 0.00 0.00 00:54:49.359 [2024-12-09T05:53:43.945Z] =================================================================================================================== 00:54:49.359 [2024-12-09T05:53:43.945Z] Total : 7341.00 28.68 0.00 0.00 0.00 0.00 0.00 00:54:49.359 00:54:49.926 05:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dacbed6d-c14c-41f9-beaa-43216d4170fd 00:54:50.186 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:54:50.186 Nvme0n1 : 2.00 7336.50 28.66 0.00 0.00 0.00 0.00 0.00 00:54:50.186 [2024-12-09T05:53:44.772Z] =================================================================================================================== 00:54:50.186 [2024-12-09T05:53:44.772Z] Total : 7336.50 28.66 0.00 0.00 0.00 0.00 0.00 00:54:50.186 00:54:50.444 true 00:54:50.444 05:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dacbed6d-c14c-41f9-beaa-43216d4170fd 00:54:50.444 05:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:54:50.702 05:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:54:50.702 05:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:54:50.702 05:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 66527 00:54:51.269 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:54:51.269 Nvme0n1 : 3.00 7287.00 28.46 0.00 0.00 0.00 0.00 0.00 00:54:51.269 [2024-12-09T05:53:45.855Z] =================================================================================================================== 00:54:51.269 [2024-12-09T05:53:45.855Z] Total : 7287.00 28.46 0.00 0.00 0.00 0.00 0.00 00:54:51.269 00:54:52.205 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:54:52.205 Nvme0n1 : 4.00 7247.25 28.31 0.00 0.00 0.00 0.00 0.00 00:54:52.205 [2024-12-09T05:53:46.791Z] =================================================================================================================== 00:54:52.205 [2024-12-09T05:53:46.791Z] Total : 7247.25 28.31 0.00 0.00 0.00 
0.00 0.00 00:54:52.205 00:54:53.142 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:54:53.142 Nvme0n1 : 5.00 7187.80 28.08 0.00 0.00 0.00 0.00 0.00 00:54:53.142 [2024-12-09T05:53:47.728Z] =================================================================================================================== 00:54:53.142 [2024-12-09T05:53:47.728Z] Total : 7187.80 28.08 0.00 0.00 0.00 0.00 0.00 00:54:53.142 00:54:54.080 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:54:54.080 Nvme0n1 : 6.00 7153.33 27.94 0.00 0.00 0.00 0.00 0.00 00:54:54.080 [2024-12-09T05:53:48.666Z] =================================================================================================================== 00:54:54.080 [2024-12-09T05:53:48.666Z] Total : 7153.33 27.94 0.00 0.00 0.00 0.00 0.00 00:54:54.080 00:54:55.018 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:54:55.018 Nvme0n1 : 7.00 7111.71 27.78 0.00 0.00 0.00 0.00 0.00 00:54:55.018 [2024-12-09T05:53:49.604Z] =================================================================================================================== 00:54:55.018 [2024-12-09T05:53:49.604Z] Total : 7111.71 27.78 0.00 0.00 0.00 0.00 0.00 00:54:55.018 00:54:56.412 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:54:56.412 Nvme0n1 : 8.00 7088.50 27.69 0.00 0.00 0.00 0.00 0.00 00:54:56.412 [2024-12-09T05:53:50.998Z] =================================================================================================================== 00:54:56.412 [2024-12-09T05:53:50.998Z] Total : 7088.50 27.69 0.00 0.00 0.00 0.00 0.00 00:54:56.412 00:54:57.348 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:54:57.348 Nvme0n1 : 9.00 6868.33 26.83 0.00 0.00 0.00 0.00 0.00 00:54:57.348 [2024-12-09T05:53:51.934Z] =================================================================================================================== 00:54:57.348 [2024-12-09T05:53:51.934Z] Total : 6868.33 26.83 0.00 0.00 0.00 0.00 0.00 00:54:57.348 00:54:58.289 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:54:58.289 Nvme0n1 : 10.00 6854.00 26.77 0.00 0.00 0.00 0.00 0.00 00:54:58.289 [2024-12-09T05:53:52.875Z] =================================================================================================================== 00:54:58.289 [2024-12-09T05:53:52.875Z] Total : 6854.00 26.77 0.00 0.00 0.00 0.00 0.00 00:54:58.289 00:54:58.289 00:54:58.289 Latency(us) 00:54:58.289 [2024-12-09T05:53:52.875Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:54:58.289 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:54:58.289 Nvme0n1 : 10.01 6857.21 26.79 0.00 0.00 18653.37 8460.10 253564.74 00:54:58.289 [2024-12-09T05:53:52.875Z] =================================================================================================================== 00:54:58.289 [2024-12-09T05:53:52.875Z] Total : 6857.21 26.79 0.00 0.00 18653.37 8460.10 253564.74 00:54:58.289 { 00:54:58.289 "results": [ 00:54:58.289 { 00:54:58.289 "job": "Nvme0n1", 00:54:58.290 "core_mask": "0x2", 00:54:58.290 "workload": "randwrite", 00:54:58.290 "status": "finished", 00:54:58.290 "queue_depth": 128, 00:54:58.290 "io_size": 4096, 00:54:58.290 "runtime": 10.013983, 00:54:58.290 "iops": 6857.21156107415, 00:54:58.290 "mibps": 26.7859826604459, 00:54:58.290 "io_failed": 0, 00:54:58.290 "io_timeout": 0, 00:54:58.290 "avg_latency_us": 
18653.373619788494, 00:54:58.290 "min_latency_us": 8460.101818181818, 00:54:58.290 "max_latency_us": 253564.74181818182 00:54:58.290 } 00:54:58.290 ], 00:54:58.290 "core_count": 1 00:54:58.290 } 00:54:58.290 05:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66500 00:54:58.290 05:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 66500 ']' 00:54:58.290 05:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 66500 00:54:58.290 05:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:54:58.290 05:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:54:58.290 05:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66500 00:54:58.290 killing process with pid 66500 00:54:58.290 Received shutdown signal, test time was about 10.000000 seconds 00:54:58.290 00:54:58.290 Latency(us) 00:54:58.290 [2024-12-09T05:53:52.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:54:58.290 [2024-12-09T05:53:52.876Z] =================================================================================================================== 00:54:58.290 [2024-12-09T05:53:52.876Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:54:58.290 05:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:54:58.290 05:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:54:58.290 05:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66500' 00:54:58.290 05:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 66500 00:54:58.290 05:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 66500 00:54:58.290 05:53:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:54:58.606 05:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:54:58.890 05:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dacbed6d-c14c-41f9-beaa-43216d4170fd 00:54:58.890 05:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:54:59.149 05:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:54:59.149 05:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:54:59.149 05:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65903 00:54:59.149 05:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 65903 00:54:59.149 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65903 Killed "${NVMF_APP[@]}" "$@" 00:54:59.149 05:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:54:59.149 05:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:54:59.149 05:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:54:59.149 05:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:54:59.149 05:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:54:59.149 05:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=66691 00:54:59.149 05:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 66691 00:54:59.149 05:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:54:59.149 05:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 66691 ']' 00:54:59.149 05:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:59.149 05:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:54:59.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:59.149 05:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:59.149 05:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:54:59.149 05:53:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:54:59.149 [2024-12-09 05:53:53.658946] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:54:59.149 [2024-12-09 05:53:53.659032] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:54:59.408 [2024-12-09 05:53:53.803864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:59.408 [2024-12-09 05:53:53.832628] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:54:59.408 [2024-12-09 05:53:53.832716] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:54:59.408 [2024-12-09 05:53:53.832728] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:54:59.408 [2024-12-09 05:53:53.832736] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:54:59.408 [2024-12-09 05:53:53.832745] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
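Because the target was killed with SIGKILL while the grown lvstore was still open, the restarted nvmf_tgt has to recover the blobstore when the aio bdev is recreated; the re-check that follows reduces to the sketch below (same UUIDs and paths as above; the test's waitforbdev helper is approximated here with bdev_wait_for_examine plus bdev_get_bdevs):

  # Recreate the aio bdev over the same file; the embedded blobstore is replayed/recovered on load
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py bdev_get_bdevs -b f33835c8-89c8-4fc5-88a0-c0c2dddd2cfe -t 2000
  # The grow performed before the kill must have persisted across the unclean shutdown
  scripts/rpc.py bdev_lvol_get_lvstores -u dacbed6d-c14c-41f9-beaa-43216d4170fd | jq -r '.[0].free_clusters'         # expect 61
  scripts/rpc.py bdev_lvol_get_lvstores -u dacbed6d-c14c-41f9-beaa-43216d4170fd | jq -r '.[0].total_data_clusters'   # expect 99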
00:54:59.408 [2024-12-09 05:53:53.833094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:55:00.343 05:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:55:00.343 05:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:55:00.343 05:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:55:00.343 05:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:55:00.343 05:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:55:00.343 05:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:55:00.343 05:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:55:00.601 [2024-12-09 05:53:54.934976] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:55:00.601 [2024-12-09 05:53:54.935492] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:55:00.601 [2024-12-09 05:53:54.935863] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:55:00.601 05:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:55:00.601 05:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f33835c8-89c8-4fc5-88a0-c0c2dddd2cfe 00:55:00.601 05:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f33835c8-89c8-4fc5-88a0-c0c2dddd2cfe 00:55:00.601 05:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:55:00.601 05:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:55:00.601 05:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:55:00.601 05:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:55:00.601 05:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:55:00.858 05:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f33835c8-89c8-4fc5-88a0-c0c2dddd2cfe -t 2000 00:55:01.115 [ 00:55:01.116 { 00:55:01.116 "aliases": [ 00:55:01.116 "lvs/lvol" 00:55:01.116 ], 00:55:01.116 "assigned_rate_limits": { 00:55:01.116 "r_mbytes_per_sec": 0, 00:55:01.116 "rw_ios_per_sec": 0, 00:55:01.116 "rw_mbytes_per_sec": 0, 00:55:01.116 "w_mbytes_per_sec": 0 00:55:01.116 }, 00:55:01.116 "block_size": 4096, 00:55:01.116 "claimed": false, 00:55:01.116 "driver_specific": { 00:55:01.116 "lvol": { 00:55:01.116 "base_bdev": "aio_bdev", 00:55:01.116 "clone": false, 00:55:01.116 "esnap_clone": false, 00:55:01.116 "lvol_store_uuid": "dacbed6d-c14c-41f9-beaa-43216d4170fd", 00:55:01.116 "num_allocated_clusters": 38, 00:55:01.116 "snapshot": false, 00:55:01.116 
"thin_provision": false 00:55:01.116 } 00:55:01.116 }, 00:55:01.116 "name": "f33835c8-89c8-4fc5-88a0-c0c2dddd2cfe", 00:55:01.116 "num_blocks": 38912, 00:55:01.116 "product_name": "Logical Volume", 00:55:01.116 "supported_io_types": { 00:55:01.116 "abort": false, 00:55:01.116 "compare": false, 00:55:01.116 "compare_and_write": false, 00:55:01.116 "copy": false, 00:55:01.116 "flush": false, 00:55:01.116 "get_zone_info": false, 00:55:01.116 "nvme_admin": false, 00:55:01.116 "nvme_io": false, 00:55:01.116 "nvme_io_md": false, 00:55:01.116 "nvme_iov_md": false, 00:55:01.116 "read": true, 00:55:01.116 "reset": true, 00:55:01.116 "seek_data": true, 00:55:01.116 "seek_hole": true, 00:55:01.116 "unmap": true, 00:55:01.116 "write": true, 00:55:01.116 "write_zeroes": true, 00:55:01.116 "zcopy": false, 00:55:01.116 "zone_append": false, 00:55:01.116 "zone_management": false 00:55:01.116 }, 00:55:01.116 "uuid": "f33835c8-89c8-4fc5-88a0-c0c2dddd2cfe", 00:55:01.116 "zoned": false 00:55:01.116 } 00:55:01.116 ] 00:55:01.116 05:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:55:01.116 05:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:55:01.116 05:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dacbed6d-c14c-41f9-beaa-43216d4170fd 00:55:01.373 05:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:55:01.373 05:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dacbed6d-c14c-41f9-beaa-43216d4170fd 00:55:01.373 05:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:55:01.631 05:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:55:01.631 05:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:55:01.631 [2024-12-09 05:53:56.200948] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:55:01.890 05:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dacbed6d-c14c-41f9-beaa-43216d4170fd 00:55:01.890 05:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:55:01.890 05:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dacbed6d-c14c-41f9-beaa-43216d4170fd 00:55:01.890 05:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:55:01.890 05:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:55:01.890 05:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:55:01.890 05:53:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:55:01.890 05:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:55:01.890 05:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:55:01.890 05:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:55:01.890 05:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:55:01.890 05:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dacbed6d-c14c-41f9-beaa-43216d4170fd 00:55:02.149 2024/12/09 05:53:56 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:dacbed6d-c14c-41f9-beaa-43216d4170fd], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:55:02.149 request: 00:55:02.149 { 00:55:02.149 "method": "bdev_lvol_get_lvstores", 00:55:02.149 "params": { 00:55:02.149 "uuid": "dacbed6d-c14c-41f9-beaa-43216d4170fd" 00:55:02.149 } 00:55:02.149 } 00:55:02.149 Got JSON-RPC error response 00:55:02.149 GoRPCClient: error on JSON-RPC call 00:55:02.149 05:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:55:02.149 05:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:55:02.149 05:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:55:02.149 05:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:55:02.149 05:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:55:02.408 aio_bdev 00:55:02.408 05:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f33835c8-89c8-4fc5-88a0-c0c2dddd2cfe 00:55:02.408 05:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f33835c8-89c8-4fc5-88a0-c0c2dddd2cfe 00:55:02.408 05:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:55:02.408 05:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:55:02.408 05:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:55:02.408 05:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:55:02.408 05:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:55:02.666 05:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f33835c8-89c8-4fc5-88a0-c0c2dddd2cfe -t 2000 00:55:02.666 [ 
00:55:02.666 { 00:55:02.666 "aliases": [ 00:55:02.666 "lvs/lvol" 00:55:02.666 ], 00:55:02.666 "assigned_rate_limits": { 00:55:02.666 "r_mbytes_per_sec": 0, 00:55:02.666 "rw_ios_per_sec": 0, 00:55:02.666 "rw_mbytes_per_sec": 0, 00:55:02.666 "w_mbytes_per_sec": 0 00:55:02.666 }, 00:55:02.666 "block_size": 4096, 00:55:02.666 "claimed": false, 00:55:02.666 "driver_specific": { 00:55:02.666 "lvol": { 00:55:02.666 "base_bdev": "aio_bdev", 00:55:02.666 "clone": false, 00:55:02.666 "esnap_clone": false, 00:55:02.666 "lvol_store_uuid": "dacbed6d-c14c-41f9-beaa-43216d4170fd", 00:55:02.666 "num_allocated_clusters": 38, 00:55:02.666 "snapshot": false, 00:55:02.666 "thin_provision": false 00:55:02.666 } 00:55:02.666 }, 00:55:02.666 "name": "f33835c8-89c8-4fc5-88a0-c0c2dddd2cfe", 00:55:02.666 "num_blocks": 38912, 00:55:02.666 "product_name": "Logical Volume", 00:55:02.666 "supported_io_types": { 00:55:02.666 "abort": false, 00:55:02.666 "compare": false, 00:55:02.666 "compare_and_write": false, 00:55:02.666 "copy": false, 00:55:02.666 "flush": false, 00:55:02.666 "get_zone_info": false, 00:55:02.666 "nvme_admin": false, 00:55:02.666 "nvme_io": false, 00:55:02.666 "nvme_io_md": false, 00:55:02.666 "nvme_iov_md": false, 00:55:02.666 "read": true, 00:55:02.666 "reset": true, 00:55:02.666 "seek_data": true, 00:55:02.666 "seek_hole": true, 00:55:02.666 "unmap": true, 00:55:02.666 "write": true, 00:55:02.666 "write_zeroes": true, 00:55:02.666 "zcopy": false, 00:55:02.666 "zone_append": false, 00:55:02.666 "zone_management": false 00:55:02.666 }, 00:55:02.666 "uuid": "f33835c8-89c8-4fc5-88a0-c0c2dddd2cfe", 00:55:02.666 "zoned": false 00:55:02.666 } 00:55:02.666 ] 00:55:02.666 05:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:55:02.666 05:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:55:02.666 05:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dacbed6d-c14c-41f9-beaa-43216d4170fd 00:55:02.924 05:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:55:02.924 05:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dacbed6d-c14c-41f9-beaa-43216d4170fd 00:55:02.924 05:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:55:03.182 05:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:55:03.182 05:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f33835c8-89c8-4fc5-88a0-c0c2dddd2cfe 00:55:03.439 05:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dacbed6d-c14c-41f9-beaa-43216d4170fd 00:55:03.696 05:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:55:03.953 05:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:55:04.516 ************************************ 00:55:04.516 END TEST lvs_grow_dirty 00:55:04.516 ************************************ 00:55:04.516 00:55:04.516 real 0m19.994s 00:55:04.516 user 0m38.586s 00:55:04.516 sys 0m9.316s 00:55:04.516 05:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:55:04.516 05:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:55:04.516 05:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:55:04.517 05:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:55:04.517 05:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:55:04.517 05:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:55:04.517 05:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:55:04.517 05:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:55:04.517 05:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:55:04.517 05:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:55:04.517 05:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:55:04.517 nvmf_trace.0 00:55:04.517 05:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:55:04.517 05:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:55:04.517 05:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:55:04.517 05:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:55:05.083 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:55:05.083 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:55:05.083 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:55:05.083 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:55:05.083 rmmod nvme_tcp 00:55:05.083 rmmod nvme_fabrics 00:55:05.083 rmmod nvme_keyring 00:55:05.083 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:55:05.083 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:55:05.083 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:55:05.083 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 66691 ']' 00:55:05.083 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 66691 00:55:05.083 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 66691 ']' 00:55:05.083 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 66691 00:55:05.083 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:55:05.083 05:53:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:55:05.083 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66691 00:55:05.083 killing process with pid 66691 00:55:05.083 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:55:05.083 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:55:05.083 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66691' 00:55:05.083 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 66691 00:55:05.083 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 66691 00:55:05.343 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:55:05.343 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:55:05.343 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:55:05.343 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:55:05.343 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:55:05.343 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:55:05.343 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:55:05.343 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:55:05.343 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:55:05.343 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:55:05.343 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:55:05.343 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:55:05.343 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:55:05.343 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:55:05.343 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:55:05.343 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:55:05.343 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:55:05.343 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:55:05.343 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:55:05.343 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:55:05.343 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:55:05.343 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:55:05.343 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:55:05.603 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:05.603 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:55:05.603 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:05.603 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:55:05.603 00:55:05.603 real 0m40.408s 00:55:05.603 user 1m2.165s 00:55:05.603 sys 0m12.503s 00:55:05.603 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:55:05.603 05:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:55:05.603 ************************************ 00:55:05.603 END TEST nvmf_lvs_grow 00:55:05.603 ************************************ 00:55:05.603 05:53:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:55:05.603 05:53:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:55:05.603 05:53:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:55:05.603 05:53:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:55:05.603 ************************************ 00:55:05.603 START TEST nvmf_bdev_io_wait 00:55:05.603 ************************************ 00:55:05.603 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:55:05.603 * Looking for test storage... 
00:55:05.603 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:55:05.603 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:55:05.603 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:55:05.603 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:55:05.603 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:55:05.603 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:55:05.603 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:55:05.603 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:55:05.603 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:55:05.603 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:55:05.603 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:55:05.603 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:55:05.603 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:55:05.603 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:55:05.603 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:55:05.603 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:55:05.603 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:55:05.603 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:55:05.603 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:55:05.603 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:55:05.603 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:55:05.603 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:55:05.603 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:55:05.863 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:55:05.863 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:55:05.863 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:55:05.863 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:55:05.863 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:55:05.863 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:55:05.863 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:55:05.863 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:55:05.863 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:55:05.863 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:55:05.863 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:55:05.863 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:55:05.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:05.863 --rc genhtml_branch_coverage=1 00:55:05.863 --rc genhtml_function_coverage=1 00:55:05.863 --rc genhtml_legend=1 00:55:05.863 --rc geninfo_all_blocks=1 00:55:05.863 --rc geninfo_unexecuted_blocks=1 00:55:05.863 00:55:05.863 ' 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:55:05.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:05.864 --rc genhtml_branch_coverage=1 00:55:05.864 --rc genhtml_function_coverage=1 00:55:05.864 --rc genhtml_legend=1 00:55:05.864 --rc geninfo_all_blocks=1 00:55:05.864 --rc geninfo_unexecuted_blocks=1 00:55:05.864 00:55:05.864 ' 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:55:05.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:05.864 --rc genhtml_branch_coverage=1 00:55:05.864 --rc genhtml_function_coverage=1 00:55:05.864 --rc genhtml_legend=1 00:55:05.864 --rc geninfo_all_blocks=1 00:55:05.864 --rc geninfo_unexecuted_blocks=1 00:55:05.864 00:55:05.864 ' 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:55:05.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:05.864 --rc genhtml_branch_coverage=1 00:55:05.864 --rc genhtml_function_coverage=1 00:55:05.864 --rc genhtml_legend=1 00:55:05.864 --rc geninfo_all_blocks=1 00:55:05.864 --rc geninfo_unexecuted_blocks=1 00:55:05.864 00:55:05.864 ' 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:55:05.864 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:55:05.864 
05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:55:05.864 Cannot find device "nvmf_init_br" 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:55:05.864 Cannot find device "nvmf_init_br2" 00:55:05.864 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:55:05.865 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:55:05.865 Cannot find device "nvmf_tgt_br" 00:55:05.865 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:55:05.865 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:55:05.865 Cannot find device "nvmf_tgt_br2" 00:55:05.865 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:55:05.865 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:55:05.865 Cannot find device "nvmf_init_br" 00:55:05.865 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:55:05.865 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:55:05.865 Cannot find device "nvmf_init_br2" 00:55:05.865 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:55:05.865 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:55:05.865 Cannot find device "nvmf_tgt_br" 00:55:05.865 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:55:05.865 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:55:05.865 Cannot find device "nvmf_tgt_br2" 00:55:05.865 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:55:05.865 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:55:05.865 Cannot find device "nvmf_br" 00:55:05.865 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:55:05.865 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:55:05.865 Cannot find device "nvmf_init_if" 00:55:05.865 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:55:05.865 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:55:05.865 Cannot find device "nvmf_init_if2" 00:55:05.865 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:55:05.865 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:55:05.865 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:55:05.865 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:55:05.865 
05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:55:05.865 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:55:05.865 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:55:05.865 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:55:05.865 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:55:05.865 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:55:05.865 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:55:05.865 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:55:05.865 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:55:06.123 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:55:06.123 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:55:06.123 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:55:06.123 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:55:06.123 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:55:06.123 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:55:06.123 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:55:06.123 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:55:06.123 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:55:06.123 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:55:06.123 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:55:06.123 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:55:06.123 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:55:06.123 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:55:06.123 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:55:06.123 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:55:06.123 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:55:06.123 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:55:06.123 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:55:06.123 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:55:06.123 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:55:06.123 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:55:06.123 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:55:06.123 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:55:06.123 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:55:06.123 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:55:06.123 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:55:06.123 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:55:06.123 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:55:06.123 00:55:06.123 --- 10.0.0.3 ping statistics --- 00:55:06.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:06.123 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:55:06.123 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:55:06.123 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:55:06.123 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:55:06.124 00:55:06.124 --- 10.0.0.4 ping statistics --- 00:55:06.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:06.124 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:55:06.124 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:55:06.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:55:06.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:55:06.124 00:55:06.124 --- 10.0.0.1 ping statistics --- 00:55:06.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:06.124 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:55:06.124 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:55:06.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:55:06.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:55:06.124 00:55:06.124 --- 10.0.0.2 ping statistics --- 00:55:06.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:06.124 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:55:06.124 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:55:06.124 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:55:06.124 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:55:06.124 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:55:06.124 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:55:06.124 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:55:06.124 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:55:06.124 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:55:06.124 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:55:06.124 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:55:06.124 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:55:06.124 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:55:06.124 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:55:06.124 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=67165 00:55:06.124 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:55:06.124 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 67165 00:55:06.124 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 67165 ']' 00:55:06.124 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:55:06.124 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:55:06.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:55:06.124 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:55:06.124 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:55:06.124 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:55:06.382 [2024-12-09 05:54:00.733871] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:55:06.382 [2024-12-09 05:54:00.734443] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:55:06.382 [2024-12-09 05:54:00.880840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:55:06.382 [2024-12-09 05:54:00.912461] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:55:06.382 [2024-12-09 05:54:00.912531] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:55:06.382 [2024-12-09 05:54:00.912558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:55:06.382 [2024-12-09 05:54:00.912565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:55:06.382 [2024-12-09 05:54:00.912571] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:55:06.382 [2024-12-09 05:54:00.913447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:55:06.382 [2024-12-09 05:54:00.913537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:55:06.382 [2024-12-09 05:54:00.913730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:55:06.382 [2024-12-09 05:54:00.913730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:55:06.640 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:55:06.640 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:55:06.641 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:55:06.641 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:55:06.641 05:54:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:55:06.641 [2024-12-09 05:54:01.087485] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:55:06.641 Malloc0 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:55:06.641 [2024-12-09 05:54:01.133999] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=67205 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=67207 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:55:06.641 { 00:55:06.641 "params": { 
00:55:06.641 "name": "Nvme$subsystem", 00:55:06.641 "trtype": "$TEST_TRANSPORT", 00:55:06.641 "traddr": "$NVMF_FIRST_TARGET_IP", 00:55:06.641 "adrfam": "ipv4", 00:55:06.641 "trsvcid": "$NVMF_PORT", 00:55:06.641 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:55:06.641 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:55:06.641 "hdgst": ${hdgst:-false}, 00:55:06.641 "ddgst": ${ddgst:-false} 00:55:06.641 }, 00:55:06.641 "method": "bdev_nvme_attach_controller" 00:55:06.641 } 00:55:06.641 EOF 00:55:06.641 )") 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=67209 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:55:06.641 { 00:55:06.641 "params": { 00:55:06.641 "name": "Nvme$subsystem", 00:55:06.641 "trtype": "$TEST_TRANSPORT", 00:55:06.641 "traddr": "$NVMF_FIRST_TARGET_IP", 00:55:06.641 "adrfam": "ipv4", 00:55:06.641 "trsvcid": "$NVMF_PORT", 00:55:06.641 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:55:06.641 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:55:06.641 "hdgst": ${hdgst:-false}, 00:55:06.641 "ddgst": ${ddgst:-false} 00:55:06.641 }, 00:55:06.641 "method": "bdev_nvme_attach_controller" 00:55:06.641 } 00:55:06.641 EOF 00:55:06.641 )") 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=67212 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:55:06.641 { 00:55:06.641 "params": { 00:55:06.641 "name": "Nvme$subsystem", 00:55:06.641 "trtype": "$TEST_TRANSPORT", 00:55:06.641 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:55:06.641 "adrfam": "ipv4", 00:55:06.641 "trsvcid": "$NVMF_PORT", 00:55:06.641 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:55:06.641 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:55:06.641 "hdgst": ${hdgst:-false}, 00:55:06.641 "ddgst": ${ddgst:-false} 00:55:06.641 }, 00:55:06.641 "method": "bdev_nvme_attach_controller" 00:55:06.641 } 00:55:06.641 EOF 00:55:06.641 )") 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:55:06.641 "params": { 00:55:06.641 "name": "Nvme1", 00:55:06.641 "trtype": "tcp", 00:55:06.641 "traddr": "10.0.0.3", 00:55:06.641 "adrfam": "ipv4", 00:55:06.641 "trsvcid": "4420", 00:55:06.641 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:55:06.641 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:55:06.641 "hdgst": false, 00:55:06.641 "ddgst": false 00:55:06.641 }, 00:55:06.641 "method": "bdev_nvme_attach_controller" 00:55:06.641 }' 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:55:06.641 "params": { 00:55:06.641 "name": "Nvme1", 00:55:06.641 "trtype": "tcp", 00:55:06.641 "traddr": "10.0.0.3", 00:55:06.641 "adrfam": "ipv4", 00:55:06.641 "trsvcid": "4420", 00:55:06.641 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:55:06.641 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:55:06.641 "hdgst": false, 00:55:06.641 "ddgst": false 00:55:06.641 }, 00:55:06.641 "method": "bdev_nvme_attach_controller" 00:55:06.641 }' 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:55:06.641 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:55:06.642 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:55:06.642 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:55:06.642 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:55:06.642 { 00:55:06.642 "params": { 00:55:06.642 "name": "Nvme$subsystem", 00:55:06.642 "trtype": "$TEST_TRANSPORT", 00:55:06.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:55:06.642 "adrfam": "ipv4", 00:55:06.642 "trsvcid": "$NVMF_PORT", 00:55:06.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:55:06.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:55:06.642 "hdgst": ${hdgst:-false}, 00:55:06.642 "ddgst": ${ddgst:-false} 00:55:06.642 }, 00:55:06.642 "method": "bdev_nvme_attach_controller" 00:55:06.642 } 00:55:06.642 EOF 00:55:06.642 )") 00:55:06.642 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:55:06.642 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:55:06.642 "params": { 00:55:06.642 "name": "Nvme1", 00:55:06.642 "trtype": "tcp", 00:55:06.642 "traddr": 
"10.0.0.3", 00:55:06.642 "adrfam": "ipv4", 00:55:06.642 "trsvcid": "4420", 00:55:06.642 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:55:06.642 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:55:06.642 "hdgst": false, 00:55:06.642 "ddgst": false 00:55:06.642 }, 00:55:06.642 "method": "bdev_nvme_attach_controller" 00:55:06.642 }' 00:55:06.642 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:55:06.642 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:55:06.642 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:55:06.642 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:55:06.642 "params": { 00:55:06.642 "name": "Nvme1", 00:55:06.642 "trtype": "tcp", 00:55:06.642 "traddr": "10.0.0.3", 00:55:06.642 "adrfam": "ipv4", 00:55:06.642 "trsvcid": "4420", 00:55:06.642 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:55:06.642 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:55:06.642 "hdgst": false, 00:55:06.642 "ddgst": false 00:55:06.642 }, 00:55:06.642 "method": "bdev_nvme_attach_controller" 00:55:06.642 }' 00:55:06.642 [2024-12-09 05:54:01.199471] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:55:06.642 [2024-12-09 05:54:01.199556] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:55:06.642 [2024-12-09 05:54:01.201186] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:55:06.642 [2024-12-09 05:54:01.201390] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:55:06.642 05:54:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 67205 00:55:06.900 [2024-12-09 05:54:01.225410] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:55:06.900 [2024-12-09 05:54:01.225495] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:55:06.900 [2024-12-09 05:54:01.234509] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:55:06.900 [2024-12-09 05:54:01.235092] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:55:06.900 [2024-12-09 05:54:01.391209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:06.900 [2024-12-09 05:54:01.421870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:55:06.900 [2024-12-09 05:54:01.435873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:06.900 [2024-12-09 05:54:01.467018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:55:06.900 [2024-12-09 05:54:01.476627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:07.158 [2024-12-09 05:54:01.508004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:55:07.158 [2024-12-09 05:54:01.521050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:07.158 Running I/O for 1 seconds... 00:55:07.158 [2024-12-09 05:54:01.551666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:55:07.158 Running I/O for 1 seconds... 00:55:07.158 Running I/O for 1 seconds... 00:55:07.158 Running I/O for 1 seconds... 00:55:08.113 178272.00 IOPS, 696.38 MiB/s 00:55:08.113 Latency(us) 00:55:08.113 [2024-12-09T05:54:02.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:08.113 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:55:08.113 Nvme1n1 : 1.00 177921.67 695.01 0.00 0.00 715.40 297.89 1936.29 00:55:08.113 [2024-12-09T05:54:02.699Z] =================================================================================================================== 00:55:08.113 [2024-12-09T05:54:02.699Z] Total : 177921.67 695.01 0.00 0.00 715.40 297.89 1936.29 00:55:08.113 10134.00 IOPS, 39.59 MiB/s 00:55:08.113 Latency(us) 00:55:08.113 [2024-12-09T05:54:02.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:08.113 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:55:08.113 Nvme1n1 : 1.01 10191.70 39.81 0.00 0.00 12507.17 6047.19 18588.39 00:55:08.113 [2024-12-09T05:54:02.699Z] =================================================================================================================== 00:55:08.113 [2024-12-09T05:54:02.699Z] Total : 10191.70 39.81 0.00 0.00 12507.17 6047.19 18588.39 00:55:08.113 7300.00 IOPS, 28.52 MiB/s 00:55:08.113 Latency(us) 00:55:08.113 [2024-12-09T05:54:02.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:08.113 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:55:08.113 Nvme1n1 : 1.01 7344.93 28.69 0.00 0.00 17322.39 8400.52 24546.21 00:55:08.113 [2024-12-09T05:54:02.700Z] =================================================================================================================== 00:55:08.114 [2024-12-09T05:54:02.700Z] Total : 7344.93 28.69 0.00 0.00 17322.39 8400.52 24546.21 00:55:08.372 8496.00 IOPS, 33.19 MiB/s 00:55:08.372 Latency(us) 00:55:08.372 [2024-12-09T05:54:02.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:08.372 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:55:08.372 Nvme1n1 : 1.01 8570.21 33.48 0.00 0.00 14875.51 5183.30 23950.43 00:55:08.372 [2024-12-09T05:54:02.958Z] 
=================================================================================================================== 00:55:08.372 [2024-12-09T05:54:02.958Z] Total : 8570.21 33.48 0.00 0.00 14875.51 5183.30 23950.43 00:55:08.372 05:54:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 67207 00:55:08.372 05:54:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 67209 00:55:08.372 05:54:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 67212 00:55:08.372 05:54:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:55:08.372 05:54:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:08.372 05:54:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:55:08.372 05:54:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:08.372 05:54:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:55:08.373 05:54:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:55:08.373 05:54:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:55:08.373 05:54:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:55:08.373 05:54:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:55:08.373 05:54:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:55:08.373 05:54:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:55:08.373 05:54:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:55:08.373 rmmod nvme_tcp 00:55:08.373 rmmod nvme_fabrics 00:55:08.373 rmmod nvme_keyring 00:55:08.373 05:54:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:55:08.373 05:54:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:55:08.373 05:54:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:55:08.373 05:54:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 67165 ']' 00:55:08.373 05:54:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 67165 00:55:08.373 05:54:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 67165 ']' 00:55:08.373 05:54:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 67165 00:55:08.373 05:54:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:55:08.373 05:54:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:55:08.373 05:54:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67165 00:55:08.373 05:54:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:55:08.373 05:54:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:55:08.373 killing process with pid 67165 00:55:08.373 05:54:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 67165' 00:55:08.373 05:54:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 67165 00:55:08.373 05:54:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 67165 00:55:08.631 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:55:08.631 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:55:08.631 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:55:08.631 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:55:08.631 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:55:08.631 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:55:08.631 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:55:08.631 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:55:08.632 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:55:08.632 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:55:08.632 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:55:08.632 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:55:08.632 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:55:08.632 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:55:08.632 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:55:08.632 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:55:08.632 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:55:08.632 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:55:08.632 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:55:08.632 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:55:08.891 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:55:08.891 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:55:08.891 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:55:08.891 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:08.891 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:55:08.891 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:08.891 05:54:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:55:08.891 00:55:08.891 real 0m3.301s 00:55:08.891 user 0m12.768s 00:55:08.891 sys 0m1.984s 00:55:08.891 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:55:08.891 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:55:08.891 ************************************ 00:55:08.891 END TEST nvmf_bdev_io_wait 00:55:08.891 ************************************ 00:55:08.891 05:54:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:55:08.891 05:54:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:55:08.891 05:54:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:55:08.891 05:54:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:55:08.891 ************************************ 00:55:08.891 START TEST nvmf_queue_depth 00:55:08.891 ************************************ 00:55:08.891 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:55:08.891 * Looking for test storage... 00:55:08.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:55:08.891 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:55:08.891 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:55:08.891 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:55:09.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:09.151 --rc genhtml_branch_coverage=1 00:55:09.151 --rc genhtml_function_coverage=1 00:55:09.151 --rc genhtml_legend=1 00:55:09.151 --rc geninfo_all_blocks=1 00:55:09.151 --rc geninfo_unexecuted_blocks=1 00:55:09.151 00:55:09.151 ' 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:55:09.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:09.151 --rc genhtml_branch_coverage=1 00:55:09.151 --rc genhtml_function_coverage=1 00:55:09.151 --rc genhtml_legend=1 00:55:09.151 --rc geninfo_all_blocks=1 00:55:09.151 --rc geninfo_unexecuted_blocks=1 00:55:09.151 00:55:09.151 ' 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:55:09.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:09.151 --rc genhtml_branch_coverage=1 00:55:09.151 --rc genhtml_function_coverage=1 00:55:09.151 --rc genhtml_legend=1 00:55:09.151 --rc geninfo_all_blocks=1 00:55:09.151 --rc geninfo_unexecuted_blocks=1 00:55:09.151 00:55:09.151 ' 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:55:09.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:09.151 --rc genhtml_branch_coverage=1 00:55:09.151 --rc genhtml_function_coverage=1 00:55:09.151 --rc genhtml_legend=1 00:55:09.151 --rc geninfo_all_blocks=1 00:55:09.151 --rc geninfo_unexecuted_blocks=1 00:55:09.151 00:55:09.151 ' 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:55:09.151 05:54:03 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:09.151 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:55:09.152 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:55:09.152 
05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:55:09.152 05:54:03 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:55:09.152 Cannot find device "nvmf_init_br" 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:55:09.152 Cannot find device "nvmf_init_br2" 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:55:09.152 Cannot find device "nvmf_tgt_br" 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:55:09.152 Cannot find device "nvmf_tgt_br2" 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:55:09.152 Cannot find device "nvmf_init_br" 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:55:09.152 Cannot find device "nvmf_init_br2" 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:55:09.152 Cannot find device "nvmf_tgt_br" 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:55:09.152 Cannot find device "nvmf_tgt_br2" 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:55:09.152 Cannot find device "nvmf_br" 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:55:09.152 Cannot find device "nvmf_init_if" 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:55:09.152 Cannot find device "nvmf_init_if2" 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:55:09.152 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:55:09.152 05:54:03 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:55:09.152 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:55:09.152 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:55:09.412 
05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:55:09.412 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:55:09.412 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:55:09.412 00:55:09.412 --- 10.0.0.3 ping statistics --- 00:55:09.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:09.412 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:55:09.412 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:55:09.412 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:55:09.412 00:55:09.412 --- 10.0.0.4 ping statistics --- 00:55:09.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:09.412 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:55:09.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:55:09.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:55:09.412 00:55:09.412 --- 10.0.0.1 ping statistics --- 00:55:09.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:09.412 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:55:09.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:55:09.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:55:09.412 00:55:09.412 --- 10.0.0.2 ping statistics --- 00:55:09.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:09.412 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=67466 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 67466 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 67466 ']' 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:55:09.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:55:09.412 05:54:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:55:09.671 [2024-12-09 05:54:04.038711] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:55:09.671 [2024-12-09 05:54:04.038801] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:55:09.671 [2024-12-09 05:54:04.180424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:09.671 [2024-12-09 05:54:04.207833] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:55:09.671 [2024-12-09 05:54:04.207898] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:55:09.671 [2024-12-09 05:54:04.207923] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:55:09.671 [2024-12-09 05:54:04.207930] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:55:09.671 [2024-12-09 05:54:04.207937] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:55:09.671 [2024-12-09 05:54:04.208216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:55:09.930 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:55:09.930 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:55:09.930 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:55:09.930 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:55:09.930 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:55:09.930 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:55:09.930 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:55:09.930 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:09.930 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:55:09.930 [2024-12-09 05:54:04.361648] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:55:09.930 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:09.930 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:55:09.930 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:09.930 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:55:09.930 Malloc0 00:55:09.930 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:09.930 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:55:09.930 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:09.930 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:55:09.930 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:55:09.930 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:55:09.930 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:09.930 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:55:09.930 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:09.930 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:55:09.931 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:09.931 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:55:09.931 [2024-12-09 05:54:04.403460] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:55:09.931 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:09.931 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=67503 00:55:09.931 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:55:09.931 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 67503 /var/tmp/bdevperf.sock 00:55:09.931 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 67503 ']' 00:55:09.931 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:55:09.931 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:55:09.931 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:55:09.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:55:09.931 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:55:09.931 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:55:09.931 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:55:09.931 [2024-12-09 05:54:04.455084] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
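For reference, the target-side setup that queue_depth.sh has just completed above, and the initiator-side bdevperf run that follows, can be reproduced standalone against a running nvmf_tgt. This is a minimal sketch, assuming the commands are issued from the SPDK repo root, that rpc.py is at scripts/rpc.py, and that the target listens on the default /var/tmp/spdk.sock RPC socket; the transport options, Malloc0 geometry, subsystem NQN, listener address 10.0.0.3:4420 and the bdevperf flags (-q 1024 -o 4096 -w verify -t 10) are taken directly from the log above, not from the test script itself.

# Target side (sketch): TCP transport, a 64 MB malloc bdev with 512-byte blocks,
# and a subsystem exposing it on 10.0.0.3:4420.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Initiator side (sketch): start bdevperf idle (-z), attach the remote controller over
# bdevperf's own RPC socket, then trigger the queue-depth-1024 verify run for 10 seconds.
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
# (wait until /var/tmp/bdevperf.sock exists before issuing the next two commands)
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests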
00:55:09.931 [2024-12-09 05:54:04.455179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67503 ] 00:55:10.189 [2024-12-09 05:54:04.598355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:10.189 [2024-12-09 05:54:04.627508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:55:10.189 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:55:10.189 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:55:10.189 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:55:10.189 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:10.189 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:55:10.448 NVMe0n1 00:55:10.448 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:10.448 05:54:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:55:10.448 Running I/O for 10 seconds... 00:55:12.320 9769.00 IOPS, 38.16 MiB/s [2024-12-09T05:54:08.279Z] 9951.50 IOPS, 38.87 MiB/s [2024-12-09T05:54:09.213Z] 10091.33 IOPS, 39.42 MiB/s [2024-12-09T05:54:10.169Z] 10195.50 IOPS, 39.83 MiB/s [2024-12-09T05:54:11.101Z] 10237.60 IOPS, 39.99 MiB/s [2024-12-09T05:54:12.035Z] 10353.83 IOPS, 40.44 MiB/s [2024-12-09T05:54:12.970Z] 10245.14 IOPS, 40.02 MiB/s [2024-12-09T05:54:13.907Z] 10329.62 IOPS, 40.35 MiB/s [2024-12-09T05:54:15.282Z] 10344.33 IOPS, 40.41 MiB/s [2024-12-09T05:54:15.282Z] 10363.30 IOPS, 40.48 MiB/s 00:55:20.696 Latency(us) 00:55:20.696 [2024-12-09T05:54:15.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:20.696 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:55:20.696 Verification LBA range: start 0x0 length 0x4000 00:55:20.696 NVMe0n1 : 10.06 10400.26 40.63 0.00 0.00 98067.79 14298.76 102474.47 00:55:20.696 [2024-12-09T05:54:15.282Z] =================================================================================================================== 00:55:20.696 [2024-12-09T05:54:15.282Z] Total : 10400.26 40.63 0.00 0.00 98067.79 14298.76 102474.47 00:55:20.696 { 00:55:20.696 "results": [ 00:55:20.696 { 00:55:20.696 "job": "NVMe0n1", 00:55:20.696 "core_mask": "0x1", 00:55:20.696 "workload": "verify", 00:55:20.696 "status": "finished", 00:55:20.696 "verify_range": { 00:55:20.696 "start": 0, 00:55:20.696 "length": 16384 00:55:20.696 }, 00:55:20.696 "queue_depth": 1024, 00:55:20.696 "io_size": 4096, 00:55:20.696 "runtime": 10.055129, 00:55:20.696 "iops": 10400.264382485793, 00:55:20.696 "mibps": 40.62603274408513, 00:55:20.696 "io_failed": 0, 00:55:20.696 "io_timeout": 0, 00:55:20.696 "avg_latency_us": 98067.78666073218, 00:55:20.696 "min_latency_us": 14298.763636363636, 00:55:20.696 "max_latency_us": 102474.47272727273 00:55:20.696 } 00:55:20.696 ], 00:55:20.696 "core_count": 1 00:55:20.696 } 00:55:20.696 05:54:14 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 67503 00:55:20.696 05:54:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 67503 ']' 00:55:20.696 05:54:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 67503 00:55:20.696 05:54:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:55:20.696 05:54:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:55:20.696 05:54:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67503 00:55:20.696 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:55:20.696 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:55:20.696 killing process with pid 67503 00:55:20.696 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67503' 00:55:20.696 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 67503 00:55:20.696 Received shutdown signal, test time was about 10.000000 seconds 00:55:20.696 00:55:20.696 Latency(us) 00:55:20.696 [2024-12-09T05:54:15.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:20.696 [2024-12-09T05:54:15.282Z] =================================================================================================================== 00:55:20.696 [2024-12-09T05:54:15.282Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:55:20.696 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 67503 00:55:20.696 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:55:20.696 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:55:20.696 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:55:20.696 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:55:20.696 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:55:20.696 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:55:20.696 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:55:20.697 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:55:20.697 rmmod nvme_tcp 00:55:20.697 rmmod nvme_fabrics 00:55:20.697 rmmod nvme_keyring 00:55:20.697 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:55:20.697 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:55:20.697 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:55:20.697 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 67466 ']' 00:55:20.697 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 67466 00:55:20.697 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 67466 ']' 00:55:20.697 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- 
# kill -0 67466 00:55:20.697 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:55:20.697 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:55:20.697 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67466 00:55:20.697 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:55:20.697 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:55:20.697 killing process with pid 67466 00:55:20.697 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67466' 00:55:20.697 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 67466 00:55:20.697 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 67466 00:55:20.955 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:55:20.955 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:55:20.955 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:55:20.955 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:55:20.955 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:55:20.955 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:55:20.955 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:55:20.955 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:55:20.955 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:55:20.955 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:55:20.955 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:55:20.955 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:55:20.955 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:55:20.955 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:55:20.955 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:55:20.955 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:55:20.955 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:55:20.955 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:55:21.213 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:55:21.213 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:55:21.213 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:55:21.213 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:55:21.213 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:55:21.213 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:21.213 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:55:21.213 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:21.213 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:55:21.213 00:55:21.213 real 0m12.303s 00:55:21.213 user 0m20.854s 00:55:21.213 sys 0m1.970s 00:55:21.213 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:55:21.213 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:55:21.213 ************************************ 00:55:21.213 END TEST nvmf_queue_depth 00:55:21.213 ************************************ 00:55:21.213 05:54:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:55:21.213 05:54:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:55:21.213 05:54:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:55:21.213 05:54:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:55:21.213 ************************************ 00:55:21.213 START TEST nvmf_target_multipath 00:55:21.213 ************************************ 00:55:21.213 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:55:21.213 * Looking for test storage... 
00:55:21.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:55:21.213 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:55:21.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:21.473 --rc genhtml_branch_coverage=1 00:55:21.473 --rc genhtml_function_coverage=1 00:55:21.473 --rc genhtml_legend=1 00:55:21.473 --rc geninfo_all_blocks=1 00:55:21.473 --rc geninfo_unexecuted_blocks=1 00:55:21.473 00:55:21.473 ' 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:55:21.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:21.473 --rc genhtml_branch_coverage=1 00:55:21.473 --rc genhtml_function_coverage=1 00:55:21.473 --rc genhtml_legend=1 00:55:21.473 --rc geninfo_all_blocks=1 00:55:21.473 --rc geninfo_unexecuted_blocks=1 00:55:21.473 00:55:21.473 ' 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:55:21.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:21.473 --rc genhtml_branch_coverage=1 00:55:21.473 --rc genhtml_function_coverage=1 00:55:21.473 --rc genhtml_legend=1 00:55:21.473 --rc geninfo_all_blocks=1 00:55:21.473 --rc geninfo_unexecuted_blocks=1 00:55:21.473 00:55:21.473 ' 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:55:21.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:21.473 --rc genhtml_branch_coverage=1 00:55:21.473 --rc genhtml_function_coverage=1 00:55:21.473 --rc genhtml_legend=1 00:55:21.473 --rc geninfo_all_blocks=1 00:55:21.473 --rc geninfo_unexecuted_blocks=1 00:55:21.473 00:55:21.473 ' 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:55:21.473 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:21.474 
05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:55:21.474 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:55:21.474 05:54:15 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:55:21.474 Cannot find device "nvmf_init_br" 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:55:21.474 Cannot find device "nvmf_init_br2" 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:55:21.474 Cannot find device "nvmf_tgt_br" 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:55:21.474 Cannot find device "nvmf_tgt_br2" 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:55:21.474 Cannot find device "nvmf_init_br" 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:55:21.474 05:54:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:55:21.474 Cannot find device "nvmf_init_br2" 00:55:21.474 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:55:21.474 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:55:21.474 Cannot find device "nvmf_tgt_br" 00:55:21.474 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:55:21.474 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:55:21.474 Cannot find device "nvmf_tgt_br2" 00:55:21.474 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:55:21.474 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:55:21.474 Cannot find device "nvmf_br" 00:55:21.474 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:55:21.474 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:55:21.474 Cannot find device "nvmf_init_if" 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:55:21.733 Cannot find device "nvmf_init_if2" 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:55:21.733 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:55:21.733 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
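Condensed from the trace above, nvmf_veth_init builds a two-path test topology: two initiator-side interfaces on the host (10.0.0.1 and 10.0.0.2) and two target-side interfaces (10.0.0.3 and 10.0.0.4) moved into the nvmf_tgt_ns_spdk namespace; the bridge enslaving and iptables ACCEPT rules follow below. A minimal sketch of the same setup, using the interface names from the log:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # ...repeated for the second path (nvmf_init_if2 / nvmf_tgt_if2 with 10.0.0.2 / 10.0.0.4),
  # after which the *_br peers are enslaved to the nvmf_br bridge and the
  # port-4420 ACCEPT rules are inserted, as traced below.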
00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:55:21.733 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:55:21.992 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:55:21.992 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:55:21.992 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:55:21.992 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:55:21.992 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:55:21.992 00:55:21.992 --- 10.0.0.3 ping statistics --- 00:55:21.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:21.992 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:55:21.992 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:55:21.992 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:55:21.992 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:55:21.992 00:55:21.992 --- 10.0.0.4 ping statistics --- 00:55:21.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:21.992 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:55:21.992 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:55:21.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:55:21.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:55:21.992 00:55:21.992 --- 10.0.0.1 ping statistics --- 00:55:21.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:21.992 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:55:21.992 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:55:21.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:55:21.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:55:21.992 00:55:21.992 --- 10.0.0.2 ping statistics --- 00:55:21.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:21.992 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:55:21.992 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:55:21.992 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:55:21.992 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:55:21.992 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:55:21.992 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:55:21.992 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:55:21.992 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:55:21.992 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:55:21.992 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:55:21.992 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:55:21.993 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:55:21.993 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:55:21.993 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:55:21.993 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:55:21.993 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:55:21.993 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=67878 00:55:21.993 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:55:21.993 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 67878 00:55:21.993 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 67878 ']' 00:55:21.993 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:55:21.993 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:55:21.993 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:55:21.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:55:21.993 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:55:21.993 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:55:21.993 [2024-12-09 05:54:16.433907] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:55:21.993 [2024-12-09 05:54:16.433994] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:55:22.252 [2024-12-09 05:54:16.581302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:55:22.252 [2024-12-09 05:54:16.611786] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:55:22.252 [2024-12-09 05:54:16.611854] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:55:22.252 [2024-12-09 05:54:16.611870] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:55:22.252 [2024-12-09 05:54:16.611877] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:55:22.252 [2024-12-09 05:54:16.611883] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:55:22.252 [2024-12-09 05:54:16.612640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:55:22.252 [2024-12-09 05:54:16.612801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:55:22.252 [2024-12-09 05:54:16.612802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:55:22.252 [2024-12-09 05:54:16.612750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:55:22.252 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:55:22.252 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:55:22.252 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:55:22.252 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:55:22.252 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:55:22.252 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:55:22.252 05:54:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:55:22.510 [2024-12-09 05:54:17.038810] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:55:22.510 05:54:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:55:23.078 Malloc0 00:55:23.078 05:54:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:55:23.078 05:54:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:55:23.339 05:54:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:55:23.617 [2024-12-09 05:54:18.076628] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:55:23.617 05:54:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:55:23.887 [2024-12-09 05:54:18.316902] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:55:23.887 05:54:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:55:24.146 05:54:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:55:24.406 05:54:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:55:24.406 05:54:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:55:24.406 05:54:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:55:24.406 05:54:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:55:24.406 05:54:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 
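Stripped of the xtrace prefixes, the target bring-up and dual-path host connect traced above reduce to roughly the following (a sketch, not the script itself; rpc.py is scripts/rpc.py talking to the default /var/tmp/spdk.sock, and the --hostnqn/--hostid placeholders stand for the generated values shown in the log):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420
  # one controller per path, so the host sees two paths to the same subsystem
  nvme connect --hostnqn=<generated-hostnqn> --hostid=<generated-hostid> \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
  nvme connect --hostnqn=<generated-hostnqn> --hostid=<generated-hostid> \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G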
00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=68003 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:55:26.309 05:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:55:26.309 [global] 00:55:26.309 thread=1 00:55:26.309 invalidate=1 00:55:26.309 rw=randrw 00:55:26.309 time_based=1 00:55:26.309 runtime=6 00:55:26.309 ioengine=libaio 00:55:26.309 direct=1 00:55:26.309 bs=4096 00:55:26.309 iodepth=128 00:55:26.309 norandommap=0 00:55:26.309 numjobs=1 00:55:26.309 00:55:26.309 verify_dump=1 00:55:26.309 verify_backlog=512 00:55:26.309 verify_state_save=0 00:55:26.309 do_verify=1 00:55:26.309 verify=crc32c-intel 00:55:26.309 [job0] 00:55:26.309 filename=/dev/nvme0n1 00:55:26.309 Could not set queue depth (nvme0n1) 00:55:26.567 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:55:26.567 fio-3.35 00:55:26.567 Starting 1 thread 00:55:27.501 05:54:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:55:27.758 05:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:55:27.758 05:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:55:27.758 05:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:55:27.758 05:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:55:27.758 05:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:55:27.758 05:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:55:27.758 05:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:55:27.758 05:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:55:27.758 05:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:55:27.758 05:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:55:27.758 05:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:55:27.758 05:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:55:27.758 05:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:55:27.758 05:54:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:55:29.131 05:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:55:29.131 05:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:55:29.131 05:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:55:29.131 05:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:55:29.131 05:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:55:29.390 05:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:55:29.390 05:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:55:29.390 05:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:55:29.390 05:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:55:29.390 05:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:55:29.390 05:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:55:29.390 05:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:55:29.390 05:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:55:29.390 05:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:55:29.390 05:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:55:29.390 05:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:55:29.390 05:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:55:29.390 05:54:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:55:30.775 05:54:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:55:30.776 05:54:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:55:30.776 05:54:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:55:30.776 05:54:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 68003 00:55:32.681 00:55:32.681 job0: (groupid=0, jobs=1): err= 0: pid=68024: Mon Dec 9 05:54:27 2024 00:55:32.681 read: IOPS=11.5k, BW=44.8MiB/s (47.0MB/s)(269MiB/6006msec) 00:55:32.681 slat (usec): min=2, max=7403, avg=50.72, stdev=229.55 00:55:32.681 clat (usec): min=1009, max=15043, avg=7578.03, stdev=1187.62 00:55:32.681 lat (usec): min=1783, max=15063, avg=7628.75, stdev=1197.37 00:55:32.681 clat percentiles (usec): 00:55:32.681 | 1.00th=[ 4621], 5.00th=[ 5800], 10.00th=[ 6456], 20.00th=[ 6783], 00:55:32.681 | 30.00th=[ 6915], 40.00th=[ 7177], 50.00th=[ 7439], 60.00th=[ 7767], 00:55:32.681 | 70.00th=[ 8094], 80.00th=[ 8455], 90.00th=[ 8979], 95.00th=[ 9634], 00:55:32.681 | 99.00th=[11207], 99.50th=[11600], 99.90th=[13304], 99.95th=[14222], 00:55:32.681 | 99.99th=[14353] 00:55:32.681 bw ( KiB/s): min= 8144, max=29632, per=52.17%, avg=23936.00, stdev=6169.10, samples=11 00:55:32.681 iops : min= 2036, max= 7408, avg=5984.00, stdev=1542.27, samples=11 00:55:32.681 write: IOPS=6801, BW=26.6MiB/s (27.9MB/s)(143MiB/5376msec); 0 zone resets 00:55:32.681 slat (usec): min=3, max=1960, avg=59.48, stdev=154.91 00:55:32.681 clat (usec): min=957, max=14465, avg=6559.88, stdev=973.37 00:55:32.681 lat (usec): min=1005, max=14779, avg=6619.36, stdev=977.54 00:55:32.681 clat percentiles (usec): 00:55:32.681 | 1.00th=[ 3687], 5.00th=[ 4883], 10.00th=[ 5538], 20.00th=[ 5932], 00:55:32.681 | 30.00th=[ 6194], 40.00th=[ 6390], 50.00th=[ 6587], 60.00th=[ 6783], 00:55:32.681 | 70.00th=[ 6980], 80.00th=[ 7177], 90.00th=[ 7570], 95.00th=[ 7963], 00:55:32.681 | 99.00th=[ 9372], 99.50th=[ 9896], 99.90th=[11338], 99.95th=[11994], 00:55:32.681 | 99.99th=[13304] 00:55:32.681 bw ( KiB/s): min= 8464, max=29192, per=88.08%, avg=23963.64, stdev=5868.43, samples=11 00:55:32.681 iops : min= 2116, max= 7298, avg=5990.91, stdev=1467.11, samples=11 00:55:32.681 lat (usec) : 1000=0.01% 00:55:32.681 lat (msec) : 2=0.01%, 4=0.79%, 10=96.57%, 20=2.63% 00:55:32.681 cpu : usr=5.75%, sys=21.73%, ctx=6658, majf=0, minf=102 00:55:32.681 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:55:32.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:32.681 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:55:32.681 issued rwts: total=68889,36564,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:32.681 latency : target=0, window=0, percentile=100.00%, depth=128 00:55:32.681 00:55:32.681 Run status group 0 (all jobs): 00:55:32.681 READ: bw=44.8MiB/s (47.0MB/s), 44.8MiB/s-44.8MiB/s (47.0MB/s-47.0MB/s), io=269MiB (282MB), run=6006-6006msec 00:55:32.681 WRITE: bw=26.6MiB/s (27.9MB/s), 26.6MiB/s-26.6MiB/s (27.9MB/s-27.9MB/s), io=143MiB (150MB), run=5376-5376msec 00:55:32.681 00:55:32.681 Disk stats (read/write): 00:55:32.681 nvme0n1: ios=68189/35682, merge=0/0, ticks=481724/218021, in_queue=699745, util=98.60% 00:55:32.681 05:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:55:32.941 05:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:55:33.200 05:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:55:33.200 05:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:55:33.200 05:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:55:33.200 05:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:55:33.200 05:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:55:33.200 05:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:55:33.200 05:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:55:33.200 05:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:55:33.200 05:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:55:33.200 05:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:55:33.200 05:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:55:33.200 05:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:55:33.200 05:54:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:55:34.581 05:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:55:34.581 05:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:55:34.581 05:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:55:34.581 05:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:55:34.581 05:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=68159 00:55:34.581 05:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:55:34.581 05:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:55:34.581 [global] 00:55:34.581 thread=1 00:55:34.581 invalidate=1 00:55:34.581 rw=randrw 00:55:34.581 time_based=1 00:55:34.581 runtime=6 00:55:34.581 ioengine=libaio 00:55:34.581 direct=1 00:55:34.581 bs=4096 00:55:34.581 iodepth=128 00:55:34.581 norandommap=0 00:55:34.581 numjobs=1 00:55:34.581 00:55:34.581 verify_dump=1 00:55:34.581 verify_backlog=512 00:55:34.581 verify_state_save=0 00:55:34.581 do_verify=1 00:55:34.581 verify=crc32c-intel 00:55:34.581 [job0] 00:55:34.581 filename=/dev/nvme0n1 00:55:34.581 Could not set queue depth (nvme0n1) 00:55:34.581 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:55:34.581 fio-3.35 00:55:34.581 Starting 1 thread 00:55:35.521 05:54:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:55:35.521 05:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:55:36.090 05:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:55:36.090 05:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:55:36.090 05:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:55:36.090 05:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:55:36.090 05:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:55:36.090 05:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:55:36.090 05:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:55:36.090 05:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:55:36.090 05:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:55:36.090 05:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:55:36.090 05:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:55:36.090 05:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:55:36.090 05:54:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:55:37.024 05:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:55:37.024 05:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:55:37.024 05:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:55:37.024 05:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:55:37.282 05:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:55:37.540 05:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:55:37.540 05:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:55:37.540 05:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:55:37.540 05:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:55:37.540 05:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:55:37.540 05:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:55:37.540 05:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:55:37.540 05:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:55:37.540 05:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:55:37.540 05:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:55:37.540 05:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:55:37.540 05:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:55:37.540 05:54:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:55:38.475 05:54:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:55:38.475 05:54:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:55:38.475 05:54:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:55:38.475 05:54:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 68159 00:55:41.002 00:55:41.002 job0: (groupid=0, jobs=1): err= 0: pid=68180: Mon Dec 9 05:54:35 2024 00:55:41.002 read: IOPS=12.7k, BW=49.6MiB/s (52.0MB/s)(298MiB/6003msec) 00:55:41.002 slat (usec): min=3, max=5401, avg=40.51, stdev=198.18 00:55:41.002 clat (usec): min=252, max=14522, avg=7005.57, stdev=1470.59 00:55:41.002 lat (usec): min=282, max=14532, avg=7046.07, stdev=1488.20 00:55:41.003 clat percentiles (usec): 00:55:41.003 | 1.00th=[ 3163], 5.00th=[ 4293], 10.00th=[ 4948], 20.00th=[ 5932], 00:55:41.003 | 30.00th=[ 6652], 40.00th=[ 6915], 50.00th=[ 7111], 60.00th=[ 7308], 00:55:41.003 | 70.00th=[ 7635], 80.00th=[ 8094], 90.00th=[ 8586], 95.00th=[ 9110], 00:55:41.003 | 99.00th=[10683], 99.50th=[11207], 99.90th=[12518], 99.95th=[13304], 00:55:41.003 | 99.99th=[13960] 00:55:41.003 bw ( KiB/s): min= 6432, max=45456, per=53.94%, avg=27409.82, stdev=11231.67, samples=11 00:55:41.003 iops : min= 1608, max=11364, avg=6852.45, stdev=2807.92, samples=11 00:55:41.003 write: IOPS=7866, BW=30.7MiB/s (32.2MB/s)(154MiB/5014msec); 0 zone resets 00:55:41.003 slat (usec): min=14, max=1868, avg=50.40, stdev=131.38 00:55:41.003 clat (usec): min=297, max=12460, avg=5773.12, stdev=1506.35 00:55:41.003 lat (usec): min=365, max=12490, avg=5823.52, stdev=1520.46 00:55:41.003 clat percentiles (usec): 00:55:41.003 | 1.00th=[ 2507], 5.00th=[ 3130], 10.00th=[ 3523], 20.00th=[ 4178], 00:55:41.003 | 30.00th=[ 4883], 40.00th=[ 5800], 50.00th=[ 6194], 60.00th=[ 6521], 00:55:41.003 | 70.00th=[ 6783], 80.00th=[ 6980], 90.00th=[ 7373], 95.00th=[ 7635], 00:55:41.003 | 99.00th=[ 8717], 99.50th=[ 9503], 99.90th=[11076], 99.95th=[11731], 00:55:41.003 | 99.99th=[12256] 00:55:41.003 bw ( KiB/s): min= 6640, max=44648, per=87.04%, avg=27386.55, stdev=11005.74, samples=11 00:55:41.003 iops : min= 1660, max=11162, avg=6846.64, stdev=2751.43, samples=11 00:55:41.003 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.01% 00:55:41.003 lat (msec) : 2=0.11%, 4=8.00%, 10=90.36%, 20=1.49% 00:55:41.003 cpu : usr=6.26%, sys=23.69%, ctx=7504, majf=0, minf=127 00:55:41.003 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:55:41.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:41.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:55:41.003 issued rwts: total=76259,39441,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:41.003 latency : target=0, window=0, percentile=100.00%, depth=128 00:55:41.003 00:55:41.003 Run status group 0 (all jobs): 00:55:41.003 READ: bw=49.6MiB/s (52.0MB/s), 49.6MiB/s-49.6MiB/s (52.0MB/s-52.0MB/s), io=298MiB (312MB), run=6003-6003msec 00:55:41.003 WRITE: bw=30.7MiB/s (32.2MB/s), 30.7MiB/s-30.7MiB/s (32.2MB/s-32.2MB/s), io=154MiB (162MB), run=5014-5014msec 00:55:41.003 00:55:41.003 Disk stats (read/write): 00:55:41.003 nvme0n1: ios=74541/39441, merge=0/0, ticks=489275/211217, in_queue=700492, util=98.55% 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:55:41.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect 
SPDKISFASTANDAWESOME 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:55:41.003 rmmod nvme_tcp 00:55:41.003 rmmod nvme_fabrics 00:55:41.003 rmmod nvme_keyring 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 67878 ']' 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 67878 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 67878 ']' 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 67878 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67878 00:55:41.003 killing process with pid 67878 00:55:41.003 05:54:35 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67878' 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 67878 00:55:41.003 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 67878 00:55:41.261 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:55:41.261 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:55:41.261 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:55:41.261 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:55:41.261 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:55:41.261 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:55:41.261 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:55:41.261 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:55:41.261 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:55:41.261 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:55:41.261 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:55:41.261 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:55:41.261 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:55:41.261 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:55:41.261 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:55:41.261 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:55:41.261 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:55:41.261 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:55:41.261 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:55:41.520 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:55:41.520 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:55:41.520 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:55:41.520 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:55:41.520 05:54:35 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:41.520 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:55:41.520 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:41.520 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:55:41.520 ************************************ 00:55:41.520 END TEST nvmf_target_multipath 00:55:41.520 ************************************ 00:55:41.520 00:55:41.520 real 0m20.245s 00:55:41.520 user 1m18.034s 00:55:41.520 sys 0m6.814s 00:55:41.520 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:55:41.520 05:54:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:55:41.520 05:54:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:55:41.520 05:54:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:55:41.520 05:54:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:55:41.520 05:54:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:55:41.520 ************************************ 00:55:41.520 START TEST nvmf_zcopy 00:55:41.520 ************************************ 00:55:41.520 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:55:41.520 * Looking for test storage... 
00:55:41.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:55:41.520 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:55:41.520 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:55:41.520 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:55:41.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:41.780 --rc genhtml_branch_coverage=1 00:55:41.780 --rc genhtml_function_coverage=1 00:55:41.780 --rc genhtml_legend=1 00:55:41.780 --rc geninfo_all_blocks=1 00:55:41.780 --rc geninfo_unexecuted_blocks=1 00:55:41.780 00:55:41.780 ' 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:55:41.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:41.780 --rc genhtml_branch_coverage=1 00:55:41.780 --rc genhtml_function_coverage=1 00:55:41.780 --rc genhtml_legend=1 00:55:41.780 --rc geninfo_all_blocks=1 00:55:41.780 --rc geninfo_unexecuted_blocks=1 00:55:41.780 00:55:41.780 ' 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:55:41.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:41.780 --rc genhtml_branch_coverage=1 00:55:41.780 --rc genhtml_function_coverage=1 00:55:41.780 --rc genhtml_legend=1 00:55:41.780 --rc geninfo_all_blocks=1 00:55:41.780 --rc geninfo_unexecuted_blocks=1 00:55:41.780 00:55:41.780 ' 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:55:41.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:55:41.780 --rc genhtml_branch_coverage=1 00:55:41.780 --rc genhtml_function_coverage=1 00:55:41.780 --rc genhtml_legend=1 00:55:41.780 --rc geninfo_all_blocks=1 00:55:41.780 --rc geninfo_unexecuted_blocks=1 00:55:41.780 00:55:41.780 ' 00:55:41.780 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
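The cmp_versions walk-through traced above is how scripts/common.sh decides whether the installed lcov (1.15 here) predates version 2, which in turn selects the flavour of coverage flags exported just afterwards: both version strings are split on '.', '-' and ':' into arrays and the components are compared numerically from left to right. A stand-alone sketch of that comparison, under the assumption that every component is a plain integer (the function and variable names below are illustrative, not the actual helpers in scripts/common.sh):

  # Print "lt", "gt" or "eq" for two dotted version strings, e.g. 1.15 vs 2.
  cmp_versions_sketch() {
      local IFS='.-:'
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( v = 0; v < len; v++ )); do
          local x=${a[v]:-0} y=${b[v]:-0}    # missing components count as 0
          (( x > y )) && { echo gt; return; }
          (( x < y )) && { echo lt; return; }
      done
      echo eq
  }
  cmp_versions_sketch 1.15 2    # prints "lt", matching the branch taken in the trace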
00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:55:41.781 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
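At this point nvmf/common.sh has established the connection parameters the zcopy test will reuse: TCP port 4420 (with 4421/4422 as spares), a freshly generated host NQN and host ID, and the NVME_CONNECT/NVME_HOST wrappers around the nvme CLI. With those variables in the environment, attaching the kernel initiator to a target subsystem and tearing it down again looks roughly like the pair of commands below; the subsystem NQN and target address are taken from the multipath run earlier in this log, and the exact block device name is an assumption, so treat this as an illustrative sketch rather than a line copied from the test scripts:

  # connect the kernel host to the SPDK subsystem exported on the target-namespace address
  nvme connect -t tcp -a 10.0.0.3 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  lsblk -o NAME,SERIAL | grep -w SPDKISFASTANDAWESOME   # locate the namespace by its serial
  # ... run I/O against the resulting /dev/nvmeXnY ...
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1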
00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:55:41.781 Cannot find device "nvmf_init_br" 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:55:41.781 05:54:36 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:55:41.781 Cannot find device "nvmf_init_br2" 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:55:41.781 Cannot find device "nvmf_tgt_br" 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:55:41.781 Cannot find device "nvmf_tgt_br2" 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:55:41.781 Cannot find device "nvmf_init_br" 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:55:41.781 Cannot find device "nvmf_init_br2" 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:55:41.781 Cannot find device "nvmf_tgt_br" 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:55:41.781 Cannot find device "nvmf_tgt_br2" 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:55:41.781 Cannot find device "nvmf_br" 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:55:41.781 Cannot find device "nvmf_init_if" 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:55:41.781 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:55:41.781 Cannot find device "nvmf_init_if2" 00:55:41.782 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:55:41.782 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:55:41.782 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:55:41.782 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:55:41.782 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:55:41.782 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:55:41.782 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:55:41.782 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:55:41.782 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:55:41.782 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:55:42.040 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:55:42.040 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:55:42.040 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:55:42.040 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:55:42.040 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:55:42.040 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:55:42.040 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:55:42.040 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:55:42.040 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:55:42.040 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:55:42.040 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:55:42.040 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:55:42.040 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:55:42.040 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:55:42.040 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:55:42.040 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:55:42.040 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:55:42.040 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:55:42.040 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:55:42.040 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:55:42.040 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:55:42.040 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:55:42.040 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:55:42.040 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:55:42.040 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:55:42.040 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:55:42.040 05:54:36 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:55:42.040 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:55:42.040 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:55:42.040 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:55:42.040 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:55:42.040 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:55:42.040 00:55:42.041 --- 10.0.0.3 ping statistics --- 00:55:42.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:42.041 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:55:42.041 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:55:42.041 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:55:42.041 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:55:42.041 00:55:42.041 --- 10.0.0.4 ping statistics --- 00:55:42.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:42.041 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:55:42.041 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:55:42.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:55:42.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:55:42.041 00:55:42.041 --- 10.0.0.1 ping statistics --- 00:55:42.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:42.041 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:55:42.041 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:55:42.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:55:42.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:55:42.041 00:55:42.041 --- 10.0.0.2 ping statistics --- 00:55:42.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:42.041 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:55:42.041 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:55:42.041 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:55:42.041 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:55:42.041 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:55:42.041 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:55:42.041 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:55:42.041 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:55:42.041 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:55:42.041 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:55:42.041 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:55:42.041 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:55:42.041 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:55:42.041 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:55:42.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:55:42.300 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=68512 00:55:42.300 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:55:42.300 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 68512 00:55:42.300 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 68512 ']' 00:55:42.300 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:55:42.300 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:55:42.300 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:55:42.300 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:55:42.300 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:55:42.300 [2024-12-09 05:54:36.688545] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:55:42.300 [2024-12-09 05:54:36.688632] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:55:42.300 [2024-12-09 05:54:36.838266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:42.300 [2024-12-09 05:54:36.865027] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:55:42.300 [2024-12-09 05:54:36.865076] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:55:42.300 [2024-12-09 05:54:36.865086] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:55:42.300 [2024-12-09 05:54:36.865092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:55:42.300 [2024-12-09 05:54:36.865098] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:55:42.300 [2024-12-09 05:54:36.865348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:55:42.559 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:55:42.559 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:55:42.559 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:55:42.559 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:55:42.559 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:55:42.559 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:55:42.559 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:55:42.559 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:55:42.559 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:42.559 05:54:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:55:42.559 [2024-12-09 05:54:37.004701] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:55:42.559 05:54:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:42.559 05:54:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:55:42.559 05:54:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:42.559 05:54:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:55:42.559 05:54:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:42.559 05:54:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:55:42.559 05:54:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:42.559 05:54:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:55:42.559 [2024-12-09 05:54:37.024792] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.3 port 4420 *** 00:55:42.559 05:54:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:42.559 05:54:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:55:42.559 05:54:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:42.559 05:54:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:55:42.559 05:54:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:42.559 05:54:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:55:42.559 05:54:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:42.559 05:54:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:55:42.559 malloc0 00:55:42.559 05:54:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:42.559 05:54:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:55:42.559 05:54:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:42.559 05:54:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:55:42.559 05:54:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:42.559 05:54:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:55:42.559 05:54:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:55:42.559 05:54:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:55:42.559 05:54:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:55:42.559 05:54:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:55:42.559 05:54:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:55:42.559 { 00:55:42.559 "params": { 00:55:42.559 "name": "Nvme$subsystem", 00:55:42.559 "trtype": "$TEST_TRANSPORT", 00:55:42.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:55:42.559 "adrfam": "ipv4", 00:55:42.559 "trsvcid": "$NVMF_PORT", 00:55:42.559 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:55:42.559 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:55:42.559 "hdgst": ${hdgst:-false}, 00:55:42.559 "ddgst": ${ddgst:-false} 00:55:42.559 }, 00:55:42.559 "method": "bdev_nvme_attach_controller" 00:55:42.559 } 00:55:42.559 EOF 00:55:42.559 )") 00:55:42.559 05:54:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:55:42.559 05:54:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:55:42.559 05:54:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:55:42.559 05:54:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:55:42.559 "params": { 00:55:42.559 "name": "Nvme1", 00:55:42.559 "trtype": "tcp", 00:55:42.559 "traddr": "10.0.0.3", 00:55:42.559 "adrfam": "ipv4", 00:55:42.559 "trsvcid": "4420", 00:55:42.559 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:55:42.559 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:55:42.559 "hdgst": false, 00:55:42.559 "ddgst": false 00:55:42.559 }, 00:55:42.559 "method": "bdev_nvme_attach_controller" 00:55:42.559 }' 00:55:42.559 [2024-12-09 05:54:37.113091] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:55:42.559 [2024-12-09 05:54:37.113179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68550 ] 00:55:42.819 [2024-12-09 05:54:37.265820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:42.819 [2024-12-09 05:54:37.304571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:55:43.079 Running I/O for 10 seconds... 00:55:44.957 7059.00 IOPS, 55.15 MiB/s [2024-12-09T05:54:40.483Z] 7155.00 IOPS, 55.90 MiB/s [2024-12-09T05:54:41.862Z] 7186.67 IOPS, 56.15 MiB/s [2024-12-09T05:54:42.798Z] 7198.00 IOPS, 56.23 MiB/s [2024-12-09T05:54:43.759Z] 7216.20 IOPS, 56.38 MiB/s [2024-12-09T05:54:44.695Z] 7238.00 IOPS, 56.55 MiB/s [2024-12-09T05:54:45.632Z] 7253.86 IOPS, 56.67 MiB/s [2024-12-09T05:54:46.568Z] 7264.62 IOPS, 56.75 MiB/s [2024-12-09T05:54:47.534Z] 7270.56 IOPS, 56.80 MiB/s [2024-12-09T05:54:47.534Z] 7266.40 IOPS, 56.77 MiB/s 00:55:52.949 Latency(us) 00:55:52.949 [2024-12-09T05:54:47.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:52.949 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:55:52.949 Verification LBA range: start 0x0 length 0x1000 00:55:52.949 Nvme1n1 : 10.01 7269.96 56.80 0.00 0.00 17551.07 1288.38 29908.25 00:55:52.949 [2024-12-09T05:54:47.535Z] =================================================================================================================== 00:55:52.949 [2024-12-09T05:54:47.535Z] Total : 7269.96 56.80 0.00 0.00 17551.07 1288.38 29908.25 00:55:53.208 05:54:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=68667 00:55:53.208 05:54:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:55:53.208 05:54:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:55:53.208 05:54:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:55:53.208 05:54:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:55:53.208 05:54:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:55:53.208 05:54:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:55:53.208 { 00:55:53.208 "params": { 00:55:53.208 "name": "Nvme$subsystem", 00:55:53.208 "trtype": "$TEST_TRANSPORT", 00:55:53.208 "traddr": "$NVMF_FIRST_TARGET_IP", 00:55:53.208 "adrfam": "ipv4", 00:55:53.208 "trsvcid": "$NVMF_PORT", 00:55:53.208 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:55:53.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:55:53.208 "hdgst": ${hdgst:-false}, 00:55:53.208 "ddgst": ${ddgst:-false} 00:55:53.208 }, 00:55:53.208 "method": "bdev_nvme_attach_controller" 00:55:53.208 } 00:55:53.208 EOF 00:55:53.208 )") 00:55:53.208 05:54:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:55:53.208 05:54:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:55:53.208 05:54:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:55:53.208 [2024-12-09 05:54:47.597557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.208 [2024-12-09 05:54:47.597622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.208 05:54:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:55:53.208 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.208 05:54:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:55:53.208 05:54:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:55:53.208 "params": { 00:55:53.208 "name": "Nvme1", 00:55:53.208 "trtype": "tcp", 00:55:53.208 "traddr": "10.0.0.3", 00:55:53.208 "adrfam": "ipv4", 00:55:53.208 "trsvcid": "4420", 00:55:53.208 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:55:53.208 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:55:53.208 "hdgst": false, 00:55:53.208 "ddgst": false 00:55:53.208 }, 00:55:53.208 "method": "bdev_nvme_attach_controller" 00:55:53.208 }' 00:55:53.208 [2024-12-09 05:54:47.609523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.208 [2024-12-09 05:54:47.609551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.208 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.208 [2024-12-09 05:54:47.621523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.208 [2024-12-09 05:54:47.621549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.208 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.208 [2024-12-09 05:54:47.633524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.208 [2024-12-09 05:54:47.633549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.208 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:55:53.208 [2024-12-09 05:54:47.645531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.208 [2024-12-09 05:54:47.645557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.208 [2024-12-09 05:54:47.647704] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:55:53.208 [2024-12-09 05:54:47.647943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68667 ] 00:55:53.208 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.208 [2024-12-09 05:54:47.657553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.208 [2024-12-09 05:54:47.657578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.208 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.208 [2024-12-09 05:54:47.669550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.208 [2024-12-09 05:54:47.669574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.209 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.209 [2024-12-09 05:54:47.681557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.209 [2024-12-09 05:54:47.681580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.209 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.209 [2024-12-09 05:54:47.693557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.209 [2024-12-09 05:54:47.693580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.209 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.209 [2024-12-09 05:54:47.705561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.209 [2024-12-09 05:54:47.705584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:55:53.209 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.209 [2024-12-09 05:54:47.717563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.209 [2024-12-09 05:54:47.717588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.209 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.209 [2024-12-09 05:54:47.729557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.209 [2024-12-09 05:54:47.729581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.209 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.209 [2024-12-09 05:54:47.745550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.209 [2024-12-09 05:54:47.745575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.209 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.209 [2024-12-09 05:54:47.757577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.209 [2024-12-09 05:54:47.757622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.209 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.209 [2024-12-09 05:54:47.769580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.209 [2024-12-09 05:54:47.769627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.209 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.209 [2024-12-09 05:54:47.781579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.209 [2024-12-09 05:54:47.781626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.209 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.468 [2024-12-09 05:54:47.792740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:53.468 [2024-12-09 05:54:47.793626] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.468 [2024-12-09 05:54:47.793707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.468 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.468 [2024-12-09 05:54:47.805650] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.468 [2024-12-09 05:54:47.805698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.468 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.468 [2024-12-09 05:54:47.817606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.469 [2024-12-09 05:54:47.817678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.469 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.469 [2024-12-09 05:54:47.823464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:55:53.469 [2024-12-09 05:54:47.829594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.469 [2024-12-09 05:54:47.829667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.469 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.469 [2024-12-09 05:54:47.841665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.469 [2024-12-09 05:54:47.841711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.469 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.469 [2024-12-09 05:54:47.853654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.469 [2024-12-09 05:54:47.853700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
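The repeated Code=-32602 responses above are the expected outcome of the negative path this test drives: NSID 1 is already attached to nqn.2016-06.io.spdk:cnode1, so every further nvmf_subsystem_add_ns call for that NSID is rejected in spdk_nvmf_subsystem_add_ns_ext() and comes back over JSON-RPC as "Invalid parameters". A minimal bash sketch of provoking the same error by hand against an already-running target follows; the rpc.py option letters and the malloc bdev sizes are assumptions taken from current SPDK scripts and may differ between releases.

scripts/rpc.py bdev_malloc_create -b malloc0 32 4096                          # backing bdev (hypothetical size/block size)
scripts/rpc.py nvmf_create_subsystem -a -s SPDK00000000000001 nqn.2016-06.io.spdk:cnode1
scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0  # first add: NSID 1 is now in use
scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0  # second add: rejected, as in the log above
# On the wire the rejected call looks roughly like:
#   request:  {"method": "nvmf_subsystem_add_ns",
#              "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
#                         "namespace": {"bdev_name": "malloc0", "nsid": 1}}}
#   response: {"error": {"code": -32602, "message": "Invalid parameters"}}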
00:55:53.469 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.469 [2024-12-09 05:54:47.865666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.469 [2024-12-09 05:54:47.865708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.469 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.469 [2024-12-09 05:54:47.877666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.469 [2024-12-09 05:54:47.877706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.469 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.469 [2024-12-09 05:54:47.889647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.469 [2024-12-09 05:54:47.889702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.469 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.469 [2024-12-09 05:54:47.901671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.469 [2024-12-09 05:54:47.901710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.469 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.469 [2024-12-09 05:54:47.913668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.469 [2024-12-09 05:54:47.913703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.469 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.469 [2024-12-09 05:54:47.925673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.469 [2024-12-09 05:54:47.925711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.469 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.469 [2024-12-09 05:54:47.937692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.469 [2024-12-09 05:54:47.937722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.469 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.469 [2024-12-09 05:54:47.949687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.469 [2024-12-09 05:54:47.949715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.469 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.469 [2024-12-09 05:54:47.961698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.469 [2024-12-09 05:54:47.961731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.469 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.469 Running I/O for 5 seconds... 
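"Running I/O for 5 seconds..." marks the point where bdevperf, configured with the resolved JSON printed earlier (controller "Nvme1" over TCP to 10.0.0.3:4420, cnode1/host1, digests off), starts its timed run while the add_ns loop keeps failing in the background. A rough sketch of driving bdevperf the same way outside the harness is shown below, assuming the printed fragment is wrapped in the usual SPDK "subsystems" config envelope; the queue depth, workload and 8 KiB I/O size are placeholders (8 KiB is only inferred from the later sample of 13101.00 IOPS at 102.35 MiB/s, since 102.35 MiB/s divided by 13101 is roughly 8192 bytes per I/O).

# sketch only -- not the literal zcopy.sh invocation; paths and flags assumed from the SPDK examples
cat > /tmp/bdevperf_nvme.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# -t 5 matches the 5-second run above; -q, -o and -w are illustrative only
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -q 128 -o 8192 -w verify -t 5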
00:55:53.469 [2024-12-09 05:54:47.973671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.469 [2024-12-09 05:54:47.973709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.469 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.469 [2024-12-09 05:54:47.991592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.469 [2024-12-09 05:54:47.991639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.469 2024/12/09 05:54:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.469 [2024-12-09 05:54:48.007512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.469 [2024-12-09 05:54:48.007559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.469 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.469 [2024-12-09 05:54:48.025710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.469 [2024-12-09 05:54:48.025743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.469 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.469 [2024-12-09 05:54:48.041099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.469 [2024-12-09 05:54:48.041144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.469 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.728 [2024-12-09 05:54:48.058725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.728 [2024-12-09 05:54:48.058769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.728 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.728 [2024-12-09 05:54:48.073572] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:55:53.728 [2024-12-09 05:54:48.073642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.728 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.728 [2024-12-09 05:54:48.089352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.728 [2024-12-09 05:54:48.089398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.728 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.728 [2024-12-09 05:54:48.106794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.728 [2024-12-09 05:54:48.106841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.728 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.728 [2024-12-09 05:54:48.122216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.728 [2024-12-09 05:54:48.122263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.728 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.728 [2024-12-09 05:54:48.133371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.728 [2024-12-09 05:54:48.133401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.728 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.728 [2024-12-09 05:54:48.150736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.728 [2024-12-09 05:54:48.150783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.728 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.728 [2024-12-09 05:54:48.164814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.728 [2024-12-09 05:54:48.164860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:55:53.728 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.728 [2024-12-09 05:54:48.180189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.728 [2024-12-09 05:54:48.180236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.728 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.728 [2024-12-09 05:54:48.197860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.728 [2024-12-09 05:54:48.197909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.728 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.728 [2024-12-09 05:54:48.214305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.728 [2024-12-09 05:54:48.214351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.728 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.728 [2024-12-09 05:54:48.231149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.729 [2024-12-09 05:54:48.231195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.729 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.729 [2024-12-09 05:54:48.248085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.729 [2024-12-09 05:54:48.248131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.729 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.729 [2024-12-09 05:54:48.265113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.729 [2024-12-09 05:54:48.265160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.729 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.729 [2024-12-09 05:54:48.280350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.729 [2024-12-09 05:54:48.280398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.729 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.729 [2024-12-09 05:54:48.296121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.729 [2024-12-09 05:54:48.296153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.729 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.729 [2024-12-09 05:54:48.307691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.729 [2024-12-09 05:54:48.307765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.729 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.988 [2024-12-09 05:54:48.325404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.988 [2024-12-09 05:54:48.325451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.988 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.988 [2024-12-09 05:54:48.339510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.988 [2024-12-09 05:54:48.339556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.988 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.988 [2024-12-09 05:54:48.356413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.988 [2024-12-09 05:54:48.356461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.988 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.988 [2024-12-09 05:54:48.371212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.988 [2024-12-09 05:54:48.371258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.988 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.988 [2024-12-09 05:54:48.386464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.988 [2024-12-09 05:54:48.386509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.988 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.988 [2024-12-09 05:54:48.403064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.988 [2024-12-09 05:54:48.403110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.989 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.989 [2024-12-09 05:54:48.419173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.989 [2024-12-09 05:54:48.419218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.989 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.989 [2024-12-09 05:54:48.430403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.989 [2024-12-09 05:54:48.430449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.989 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.989 [2024-12-09 05:54:48.447020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.989 [2024-12-09 05:54:48.447082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.989 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:55:53.989 [2024-12-09 05:54:48.463847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.989 [2024-12-09 05:54:48.463911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.989 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.989 [2024-12-09 05:54:48.479017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.989 [2024-12-09 05:54:48.479062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.989 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.989 [2024-12-09 05:54:48.493449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.989 [2024-12-09 05:54:48.493496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.989 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.989 [2024-12-09 05:54:48.508551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.989 [2024-12-09 05:54:48.508597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.989 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.989 [2024-12-09 05:54:48.524036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.989 [2024-12-09 05:54:48.524083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.989 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.989 [2024-12-09 05:54:48.541434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:53.989 [2024-12-09 05:54:48.541480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.989 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:53.989 [2024-12-09 05:54:48.557761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:55:53.989 [2024-12-09 05:54:48.557809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:53.989 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:54.249 [2024-12-09 05:54:48.575608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:54.249 [2024-12-09 05:54:48.575680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:54.249 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:54.249 [2024-12-09 05:54:48.589138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:54.249 [2024-12-09 05:54:48.589169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:54.249 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:54.249 [2024-12-09 05:54:48.605134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:54.249 [2024-12-09 05:54:48.605180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:54.249 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:54.249 [2024-12-09 05:54:48.620726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:54.249 [2024-12-09 05:54:48.620772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:54.249 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:54.249 [2024-12-09 05:54:48.636009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:54.249 [2024-12-09 05:54:48.636055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:54.249 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:54.249 [2024-12-09 05:54:48.647320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:54.249 [2024-12-09 05:54:48.647366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:55:54.249 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:54.249 [2024-12-09 05:54:48.663025] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:54.249 [2024-12-09 05:54:48.663072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:54.249 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:54.250 [2024-12-09 05:54:48.680107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:54.250 [2024-12-09 05:54:48.680152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:54.250 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:54.250 [2024-12-09 05:54:48.697047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:54.250 [2024-12-09 05:54:48.697092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:54.250 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:54.250 [2024-12-09 05:54:48.714104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:54.250 [2024-12-09 05:54:48.714150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:54.250 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:54.250 [2024-12-09 05:54:48.730019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:54.250 [2024-12-09 05:54:48.730064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:54.250 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:54.250 [2024-12-09 05:54:48.745765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:54.250 [2024-12-09 05:54:48.745797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:54.250 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:54.250 [2024-12-09 05:54:48.763433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:54.250 [2024-12-09 05:54:48.763479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:54.250 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:54.250 [2024-12-09 05:54:48.778120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:54.250 [2024-12-09 05:54:48.778154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:54.250 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:54.250 [2024-12-09 05:54:48.792668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:54.250 [2024-12-09 05:54:48.792699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:54.250 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:54.250 [2024-12-09 05:54:48.809500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:54.250 [2024-12-09 05:54:48.809547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:54.250 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:54.250 [2024-12-09 05:54:48.822857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:54.250 [2024-12-09 05:54:48.822901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:54.250 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:54.510 [2024-12-09 05:54:48.838838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:54.510 [2024-12-09 05:54:48.838884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:54.510 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:54.510 [2024-12-09 05:54:48.856101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:54.510 [2024-12-09 05:54:48.856147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:54.510 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:54.510 [2024-12-09 05:54:48.872716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:54.510 [2024-12-09 05:54:48.872760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:54.510 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:54.510 [2024-12-09 05:54:48.889784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:54.510 [2024-12-09 05:54:48.889833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:54.510 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:54.510 [2024-12-09 05:54:48.906718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:54.510 [2024-12-09 05:54:48.906763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:54.510 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:54.510 [2024-12-09 05:54:48.922534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:54.510 [2024-12-09 05:54:48.922579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:54.510 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:54.510 [2024-12-09 05:54:48.933673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:54.510 [2024-12-09 05:54:48.933729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:54.510 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:55:54.510 [2024-12-09 05:54:48.950441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:54.510 [2024-12-09 05:54:48.950486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:54.510 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:54.510 [2024-12-09 05:54:48.966176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:54.510 [2024-12-09 05:54:48.966222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:54.510 13101.00 IOPS, 102.35 MiB/s [2024-12-09T05:54:49.096Z] 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:54.510 [2024-12-09 05:54:48.984010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:54.510 [2024-12-09 05:54:48.984055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:54.510 2024/12/09 05:54:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:54.510 [2024-12-09 05:54:48.999027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:54.510 [2024-12-09 05:54:48.999091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:54.510 2024/12/09 05:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:54.510 [2024-12-09 05:54:49.014547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:54.510 [2024-12-09 05:54:49.014593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:54.510 2024/12/09 05:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:54.510 [2024-12-09 05:54:49.030477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:54.510 [2024-12-09 05:54:49.030522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:54.510 2024/12/09 05:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:54.510 [2024-12-09 05:54:49.047602] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:54.510 [2024-12-09 05:54:49.047673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:54.510 2024/12/09 05:54:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-message error sequence (subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use; nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace; JSON-RPC error Code=-32602 Msg=Invalid parameters for nvmf_subsystem_add_ns with identical params) repeats for every further attempt from [2024-12-09 05:54:49.063622] through [2024-12-09 05:54:51.111186] (elapsed 00:55:54.510 through 00:55:56.594), interleaved with the two throughput samples below; only the timestamps differ ...]
00:55:55.552 13139.00 IOPS, 102.65 MiB/s [2024-12-09T05:54:50.138Z]
00:55:56.594 13144.67 IOPS, 102.69 MiB/s [2024-12-09T05:54:51.180Z]
00:55:56.594 [2024-12-09 05:54:51.128319]
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:56.594 [2024-12-09 05:54:51.128365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:56.594 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:56.594 [2024-12-09 05:54:51.145224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:56.594 [2024-12-09 05:54:51.145269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:56.594 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:56.594 [2024-12-09 05:54:51.160708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:56.594 [2024-12-09 05:54:51.160753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:56.594 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:56.594 [2024-12-09 05:54:51.175976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:56.594 [2024-12-09 05:54:51.176024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:56.854 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:56.854 [2024-12-09 05:54:51.194010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:56.854 [2024-12-09 05:54:51.194073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:56.854 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:56.854 [2024-12-09 05:54:51.208324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:56.854 [2024-12-09 05:54:51.208370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:56.854 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:56.854 [2024-12-09 05:54:51.224724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:56.854 [2024-12-09 
05:54:51.224770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:56.854 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:56.854 [2024-12-09 05:54:51.240688] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:56.854 [2024-12-09 05:54:51.240732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:56.854 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:56.854 [2024-12-09 05:54:51.257090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:56.854 [2024-12-09 05:54:51.257136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:56.854 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:56.855 [2024-12-09 05:54:51.274458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:56.855 [2024-12-09 05:54:51.274504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:56.855 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:56.855 [2024-12-09 05:54:51.291379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:56.855 [2024-12-09 05:54:51.291441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:56.855 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:56.855 [2024-12-09 05:54:51.307382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:56.855 [2024-12-09 05:54:51.307428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:56.855 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:56.855 [2024-12-09 05:54:51.324973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:56.855 [2024-12-09 05:54:51.325020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:56.855 2024/12/09 05:54:51 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:56.855 [2024-12-09 05:54:51.341761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:56.855 [2024-12-09 05:54:51.341791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:56.855 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:56.855 [2024-12-09 05:54:51.358066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:56.855 [2024-12-09 05:54:51.358110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:56.855 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:56.855 [2024-12-09 05:54:51.375023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:56.855 [2024-12-09 05:54:51.375070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:56.855 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:56.855 [2024-12-09 05:54:51.391577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:56.855 [2024-12-09 05:54:51.391623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:56.855 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:56.855 [2024-12-09 05:54:51.407949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:56.855 [2024-12-09 05:54:51.407996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:56.855 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:56.855 [2024-12-09 05:54:51.424555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:56.855 [2024-12-09 05:54:51.424601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:56.855 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.115 [2024-12-09 05:54:51.442023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.115 [2024-12-09 05:54:51.442072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.115 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.115 [2024-12-09 05:54:51.456642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.115 [2024-12-09 05:54:51.456713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.115 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.115 [2024-12-09 05:54:51.472103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.115 [2024-12-09 05:54:51.472150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.115 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.115 [2024-12-09 05:54:51.489432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.115 [2024-12-09 05:54:51.489479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.115 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.115 [2024-12-09 05:54:51.504446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.115 [2024-12-09 05:54:51.504492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.115 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.115 [2024-12-09 05:54:51.519292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.115 [2024-12-09 05:54:51.519338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.115 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.115 [2024-12-09 05:54:51.535032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.115 [2024-12-09 05:54:51.535078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.115 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.115 [2024-12-09 05:54:51.552881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.115 [2024-12-09 05:54:51.552926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.115 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.115 [2024-12-09 05:54:51.569325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.115 [2024-12-09 05:54:51.569371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.115 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.115 [2024-12-09 05:54:51.584941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.115 [2024-12-09 05:54:51.584988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.115 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.115 [2024-12-09 05:54:51.594726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.115 [2024-12-09 05:54:51.594767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.115 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.115 [2024-12-09 05:54:51.610197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.115 [2024-12-09 05:54:51.610242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.115 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.115 [2024-12-09 05:54:51.621925] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.115 [2024-12-09 05:54:51.621958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.115 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.115 [2024-12-09 05:54:51.637585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.115 [2024-12-09 05:54:51.637680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.115 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.115 [2024-12-09 05:54:51.654104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.115 [2024-12-09 05:54:51.654150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.115 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.115 [2024-12-09 05:54:51.671132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.115 [2024-12-09 05:54:51.671177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.116 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.116 [2024-12-09 05:54:51.687925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.116 [2024-12-09 05:54:51.687973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.116 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.375 [2024-12-09 05:54:51.706109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.375 [2024-12-09 05:54:51.706155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.376 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.376 [2024-12-09 05:54:51.721171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.376 [2024-12-09 
05:54:51.721217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.376 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.376 [2024-12-09 05:54:51.733518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.376 [2024-12-09 05:54:51.733563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.376 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.376 [2024-12-09 05:54:51.749091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.376 [2024-12-09 05:54:51.749137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.376 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.376 [2024-12-09 05:54:51.760764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.376 [2024-12-09 05:54:51.760809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.376 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.376 [2024-12-09 05:54:51.776245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.376 [2024-12-09 05:54:51.776290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.376 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.376 [2024-12-09 05:54:51.794021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.376 [2024-12-09 05:54:51.794067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.376 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.376 [2024-12-09 05:54:51.809788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.376 [2024-12-09 05:54:51.809818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.376 2024/12/09 05:54:51 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.376 [2024-12-09 05:54:51.825081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.376 [2024-12-09 05:54:51.825126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.376 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.376 [2024-12-09 05:54:51.841732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.376 [2024-12-09 05:54:51.841763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.376 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.376 [2024-12-09 05:54:51.858723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.376 [2024-12-09 05:54:51.858768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.376 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.376 [2024-12-09 05:54:51.876013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.376 [2024-12-09 05:54:51.876074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.376 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.376 [2024-12-09 05:54:51.892470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.376 [2024-12-09 05:54:51.892516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.376 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.376 [2024-12-09 05:54:51.909047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.376 [2024-12-09 05:54:51.909094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.376 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.376 [2024-12-09 05:54:51.925573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.376 [2024-12-09 05:54:51.925643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.376 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.376 [2024-12-09 05:54:51.942763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.376 [2024-12-09 05:54:51.942808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.376 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.636 [2024-12-09 05:54:51.959969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.636 [2024-12-09 05:54:51.960015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.636 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.636 13158.00 IOPS, 102.80 MiB/s [2024-12-09T05:54:52.222Z] [2024-12-09 05:54:51.975838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.636 [2024-12-09 05:54:51.975882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.636 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.636 [2024-12-09 05:54:51.993333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.636 [2024-12-09 05:54:51.993379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.636 2024/12/09 05:54:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.636 [2024-12-09 05:54:52.010677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.636 [2024-12-09 05:54:52.010716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.636 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) 
nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.636 [2024-12-09 05:54:52.026999] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.636 [2024-12-09 05:54:52.027045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.636 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.636 [2024-12-09 05:54:52.043908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.636 [2024-12-09 05:54:52.043954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.636 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.636 [2024-12-09 05:54:52.060410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.636 [2024-12-09 05:54:52.060456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.636 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.636 [2024-12-09 05:54:52.077119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.636 [2024-12-09 05:54:52.077149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.636 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.636 [2024-12-09 05:54:52.092814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.636 [2024-12-09 05:54:52.092862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.636 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.636 [2024-12-09 05:54:52.110524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.636 [2024-12-09 05:54:52.110571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.636 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:55:57.636 [2024-12-09 05:54:52.126597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.636 [2024-12-09 05:54:52.126643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.636 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.636 [2024-12-09 05:54:52.143456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.636 [2024-12-09 05:54:52.143503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.636 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.636 [2024-12-09 05:54:52.160430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.636 [2024-12-09 05:54:52.160476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.636 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.636 [2024-12-09 05:54:52.176765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.636 [2024-12-09 05:54:52.176811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.636 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.636 [2024-12-09 05:54:52.193160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.636 [2024-12-09 05:54:52.193205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.636 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.637 [2024-12-09 05:54:52.210392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.637 [2024-12-09 05:54:52.210438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.637 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.896 [2024-12-09 05:54:52.226968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:55:57.896 [2024-12-09 05:54:52.227013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.896 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.896 [2024-12-09 05:54:52.242646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.896 [2024-12-09 05:54:52.242702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.896 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.896 [2024-12-09 05:54:52.254239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.896 [2024-12-09 05:54:52.254283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.896 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.896 [2024-12-09 05:54:52.269482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.896 [2024-12-09 05:54:52.269528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.896 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.896 [2024-12-09 05:54:52.285191] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.896 [2024-12-09 05:54:52.285236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.896 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.896 [2024-12-09 05:54:52.302028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.896 [2024-12-09 05:54:52.302074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.896 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.896 [2024-12-09 05:54:52.319225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.896 [2024-12-09 05:54:52.319272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:55:57.896 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.896 [2024-12-09 05:54:52.335133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.896 [2024-12-09 05:54:52.335178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.896 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.896 [2024-12-09 05:54:52.347358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.896 [2024-12-09 05:54:52.347404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.896 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.896 [2024-12-09 05:54:52.362392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.896 [2024-12-09 05:54:52.362438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.896 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.896 [2024-12-09 05:54:52.376459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.897 [2024-12-09 05:54:52.376505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.897 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.897 [2024-12-09 05:54:52.392426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.897 [2024-12-09 05:54:52.392471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.897 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.897 [2024-12-09 05:54:52.409237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.897 [2024-12-09 05:54:52.409282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.897 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.897 [2024-12-09 05:54:52.426226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.897 [2024-12-09 05:54:52.426271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.897 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.897 [2024-12-09 05:54:52.442496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.897 [2024-12-09 05:54:52.442542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.897 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.897 [2024-12-09 05:54:52.459522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.897 [2024-12-09 05:54:52.459567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:57.897 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:57.897 [2024-12-09 05:54:52.476407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:57.897 [2024-12-09 05:54:52.476454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.156 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.156 [2024-12-09 05:54:52.492391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.156 [2024-12-09 05:54:52.492436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.156 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.156 [2024-12-09 05:54:52.502361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.156 [2024-12-09 05:54:52.502406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.157 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.157 [2024-12-09 05:54:52.516257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.157 [2024-12-09 05:54:52.516303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.157 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.157 [2024-12-09 05:54:52.531013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.157 [2024-12-09 05:54:52.531059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.157 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.157 [2024-12-09 05:54:52.546798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.157 [2024-12-09 05:54:52.546845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.157 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.157 [2024-12-09 05:54:52.562793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.157 [2024-12-09 05:54:52.562825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.157 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.157 [2024-12-09 05:54:52.572610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.157 [2024-12-09 05:54:52.572681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.157 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.157 [2024-12-09 05:54:52.588235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.157 [2024-12-09 05:54:52.588281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.157 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:55:58.157 [2024-12-09 05:54:52.598076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.157 [2024-12-09 05:54:52.598107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.157 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.157 [2024-12-09 05:54:52.612521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.157 [2024-12-09 05:54:52.612567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.157 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.157 [2024-12-09 05:54:52.627015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.157 [2024-12-09 05:54:52.627061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.157 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.157 [2024-12-09 05:54:52.642331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.157 [2024-12-09 05:54:52.642377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.157 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.157 [2024-12-09 05:54:52.659239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.157 [2024-12-09 05:54:52.659285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.157 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.157 [2024-12-09 05:54:52.676526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.157 [2024-12-09 05:54:52.676572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.157 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.157 [2024-12-09 05:54:52.692462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:55:58.157 [2024-12-09 05:54:52.692507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.157 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.157 [2024-12-09 05:54:52.709557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.157 [2024-12-09 05:54:52.709626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.157 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.157 [2024-12-09 05:54:52.724680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.157 [2024-12-09 05:54:52.724725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.157 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.417 [2024-12-09 05:54:52.740603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.417 [2024-12-09 05:54:52.740661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.417 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.417 [2024-12-09 05:54:52.756698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.417 [2024-12-09 05:54:52.756744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.417 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.417 [2024-12-09 05:54:52.774370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.417 [2024-12-09 05:54:52.774415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.417 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.417 [2024-12-09 05:54:52.790902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.417 [2024-12-09 05:54:52.790949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:55:58.417 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.417 [2024-12-09 05:54:52.806990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.417 [2024-12-09 05:54:52.807036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.417 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.417 [2024-12-09 05:54:52.823621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.417 [2024-12-09 05:54:52.823678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.417 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.417 [2024-12-09 05:54:52.840701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.417 [2024-12-09 05:54:52.840746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.417 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.417 [2024-12-09 05:54:52.856623] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.417 [2024-12-09 05:54:52.856681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.417 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.417 [2024-12-09 05:54:52.873687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.417 [2024-12-09 05:54:52.873720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.417 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.417 [2024-12-09 05:54:52.890767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.417 [2024-12-09 05:54:52.890812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.417 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.417 [2024-12-09 05:54:52.907192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.417 [2024-12-09 05:54:52.907237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.418 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.418 [2024-12-09 05:54:52.924430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.418 [2024-12-09 05:54:52.924476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.418 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.418 [2024-12-09 05:54:52.940808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.418 [2024-12-09 05:54:52.940855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.418 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.418 [2024-12-09 05:54:52.957146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.418 [2024-12-09 05:54:52.957192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.418 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.418 13186.80 IOPS, 103.02 MiB/s [2024-12-09T05:54:53.004Z] [2024-12-09 05:54:52.973701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.418 [2024-12-09 05:54:52.973732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.418 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.418 00:55:58.418 Latency(us) 00:55:58.418 [2024-12-09T05:54:53.004Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:58.418 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:55:58.418 Nvme1n1 : 5.01 13190.20 103.05 0.00 0.00 9692.64 4140.68 21090.68 00:55:58.418 [2024-12-09T05:54:53.004Z] 
=================================================================================================================== 00:55:58.418 [2024-12-09T05:54:53.004Z] Total : 13190.20 103.05 0.00 0.00 9692.64 4140.68 21090.68 00:55:58.418 [2024-12-09 05:54:52.985222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.418 [2024-12-09 05:54:52.985251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.418 2024/12/09 05:54:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.418 [2024-12-09 05:54:52.997239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.418 [2024-12-09 05:54:52.997271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.678 2024/12/09 05:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.678 [2024-12-09 05:54:53.009276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.678 [2024-12-09 05:54:53.009328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.678 2024/12/09 05:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.678 [2024-12-09 05:54:53.021265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.678 [2024-12-09 05:54:53.021303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.678 2024/12/09 05:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.678 [2024-12-09 05:54:53.033266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.678 [2024-12-09 05:54:53.033303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.678 2024/12/09 05:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.678 [2024-12-09 05:54:53.045273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.678 [2024-12-09 05:54:53.045314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.678 2024/12/09 05:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], 
err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.678 [2024-12-09 05:54:53.057272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.678 [2024-12-09 05:54:53.057312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.678 2024/12/09 05:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.678 [2024-12-09 05:54:53.069253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.678 [2024-12-09 05:54:53.069280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.678 2024/12/09 05:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.678 [2024-12-09 05:54:53.081270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.678 [2024-12-09 05:54:53.081303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.678 2024/12/09 05:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.678 [2024-12-09 05:54:53.093274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.678 [2024-12-09 05:54:53.093318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.678 2024/12/09 05:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.678 [2024-12-09 05:54:53.105260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:55:58.678 [2024-12-09 05:54:53.105285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:55:58.678 2024/12/09 05:54:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:55:58.678 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (68667) - No such process 00:55:58.678 05:54:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 68667 00:55:58.678 05:54:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:55:58.678 05:54:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:58.678 05:54:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:55:58.678 05:54:53 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:58.678 05:54:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:55:58.678 05:54:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:58.678 05:54:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:55:58.678 delay0 00:55:58.678 05:54:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:58.678 05:54:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:55:58.678 05:54:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:55:58.678 05:54:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:55:58.678 05:54:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:55:58.678 05:54:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:55:58.938 [2024-12-09 05:54:53.303111] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:56:05.503 Initializing NVMe Controllers 00:56:05.503 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:56:05.503 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:56:05.503 Initialization complete. Launching workers. 
00:56:05.503 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 46 00:56:05.503 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 333, failed to submit 33 00:56:05.503 success 143, unsuccessful 190, failed 0 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:56:05.503 rmmod nvme_tcp 00:56:05.503 rmmod nvme_fabrics 00:56:05.503 rmmod nvme_keyring 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 68512 ']' 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 68512 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 68512 ']' 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 68512 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68512 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:56:05.503 killing process with pid 68512 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68512' 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 68512 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 68512 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:56:05.503 05:54:59 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:56:05.503 00:56:05.503 real 0m23.849s 00:56:05.503 user 0m38.590s 00:56:05.503 sys 0m6.525s 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:56:05.503 ************************************ 00:56:05.503 END TEST nvmf_zcopy 00:56:05.503 ************************************ 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:56:05.503 05:54:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:56:05.503 ************************************ 00:56:05.503 START TEST nvmf_nmic 00:56:05.503 ************************************ 00:56:05.503 05:54:59 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:56:05.503 * Looking for test storage... 00:56:05.503 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:56:05.503 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:56:05.503 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:56:05.503 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:56:05.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:05.762 --rc genhtml_branch_coverage=1 00:56:05.762 --rc genhtml_function_coverage=1 00:56:05.762 --rc genhtml_legend=1 00:56:05.762 --rc geninfo_all_blocks=1 00:56:05.762 --rc geninfo_unexecuted_blocks=1 00:56:05.762 00:56:05.762 ' 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:56:05.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:05.762 --rc genhtml_branch_coverage=1 00:56:05.762 --rc genhtml_function_coverage=1 00:56:05.762 --rc genhtml_legend=1 00:56:05.762 --rc geninfo_all_blocks=1 00:56:05.762 --rc geninfo_unexecuted_blocks=1 00:56:05.762 00:56:05.762 ' 00:56:05.762 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:56:05.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:05.762 --rc genhtml_branch_coverage=1 00:56:05.762 --rc genhtml_function_coverage=1 00:56:05.762 --rc genhtml_legend=1 00:56:05.762 --rc geninfo_all_blocks=1 00:56:05.762 --rc geninfo_unexecuted_blocks=1 00:56:05.762 00:56:05.763 ' 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:56:05.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:05.763 --rc genhtml_branch_coverage=1 00:56:05.763 --rc genhtml_function_coverage=1 00:56:05.763 --rc genhtml_legend=1 00:56:05.763 --rc geninfo_all_blocks=1 00:56:05.763 --rc geninfo_unexecuted_blocks=1 00:56:05.763 00:56:05.763 ' 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:56:05.763 05:55:00 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:56:05.763 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:56:05.763 05:55:00 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:56:05.763 Cannot 
find device "nvmf_init_br" 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:56:05.763 Cannot find device "nvmf_init_br2" 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:56:05.763 Cannot find device "nvmf_tgt_br" 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:56:05.763 Cannot find device "nvmf_tgt_br2" 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:56:05.763 Cannot find device "nvmf_init_br" 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:56:05.763 Cannot find device "nvmf_init_br2" 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:56:05.763 Cannot find device "nvmf_tgt_br" 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:56:05.763 Cannot find device "nvmf_tgt_br2" 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:56:05.763 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:56:05.764 Cannot find device "nvmf_br" 00:56:05.764 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:56:05.764 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:56:05.764 Cannot find device "nvmf_init_if" 00:56:05.764 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:56:05.764 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:56:05.764 Cannot find device "nvmf_init_if2" 00:56:05.764 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:56:05.764 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:56:05.764 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:56:05.764 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:56:05.764 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:56:05.764 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:56:05.764 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:56:05.764 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:56:05.764 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:56:05.764 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:56:05.764 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:56:05.764 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:56:05.764 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:56:05.764 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:56:06.023 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:56:06.023 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:56:06.023 00:56:06.023 --- 10.0.0.3 ping statistics --- 00:56:06.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:06.023 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:56:06.023 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:56:06.023 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:56:06.023 00:56:06.023 --- 10.0.0.4 ping statistics --- 00:56:06.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:06.023 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:56:06.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:56:06.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:56:06.023 00:56:06.023 --- 10.0.0.1 ping statistics --- 00:56:06.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:06.023 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:56:06.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:56:06.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:56:06.023 00:56:06.023 --- 10.0.0.2 ping statistics --- 00:56:06.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:06.023 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=69038 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 69038 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 69038 ']' 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:56:06.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:56:06.023 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:56:06.282 [2024-12-09 05:55:00.610072] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:56:06.282 [2024-12-09 05:55:00.610163] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:56:06.282 [2024-12-09 05:55:00.761498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:56:06.283 [2024-12-09 05:55:00.803600] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:56:06.283 [2024-12-09 05:55:00.803684] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:56:06.283 [2024-12-09 05:55:00.803700] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:56:06.283 [2024-12-09 05:55:00.803710] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:56:06.283 [2024-12-09 05:55:00.803719] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:56:06.283 [2024-12-09 05:55:00.804591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:56:06.283 [2024-12-09 05:55:00.805264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:56:06.283 [2024-12-09 05:55:00.805370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:56:06.283 [2024-12-09 05:55:00.805374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:56:06.542 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:56:06.542 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:56:06.542 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:56:06.542 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:56:06.542 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:56:06.542 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:56:06.542 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:56:06.542 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:06.542 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:56:06.542 [2024-12-09 05:55:00.955520] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:56:06.542 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:06.542 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:56:06.542 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:06.542 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:56:06.542 Malloc0 00:56:06.542 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:06.542 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:56:06.542 05:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:06.542 05:55:00 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:56:06.542 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:06.542 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:56:06.542 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:06.543 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:56:06.543 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:06.543 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:56:06.543 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:06.543 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:56:06.543 [2024-12-09 05:55:01.016552] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:56:06.543 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:06.543 test case1: single bdev can't be used in multiple subsystems 00:56:06.543 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:56:06.543 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:56:06.543 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:06.543 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:56:06.543 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:06.543 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:56:06.543 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:06.543 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:56:06.543 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:06.543 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:56:06.543 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:56:06.543 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:06.543 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:56:06.543 [2024-12-09 05:55:01.040394] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:56:06.543 [2024-12-09 05:55:01.040428] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:56:06.543 [2024-12-09 05:55:01.040454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:56:06.543 2024/12/09 05:55:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 hide_metadata:%!s(bool=false) 
no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:56:06.543 request: 00:56:06.543 { 00:56:06.543 "method": "nvmf_subsystem_add_ns", 00:56:06.543 "params": { 00:56:06.543 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:56:06.543 "namespace": { 00:56:06.543 "bdev_name": "Malloc0", 00:56:06.543 "no_auto_visible": false, 00:56:06.543 "hide_metadata": false 00:56:06.543 } 00:56:06.543 } 00:56:06.543 } 00:56:06.543 Got JSON-RPC error response 00:56:06.543 GoRPCClient: error on JSON-RPC call 00:56:06.543 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:56:06.543 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:56:06.543 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:56:06.543 Adding namespace failed - expected result. 00:56:06.543 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:56:06.543 test case2: host connect to nvmf target in multiple paths 00:56:06.543 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:56:06.543 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:56:06.543 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:06.543 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:56:06.543 [2024-12-09 05:55:01.052503] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:56:06.543 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:06.543 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:56:06.802 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:56:07.060 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:56:07.060 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:56:07.060 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:56:07.060 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:56:07.060 05:55:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:56:08.965 05:55:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:56:08.965 05:55:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:56:08.965 05:55:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:56:08.965 05:55:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 
00:56:08.965 05:55:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:56:08.965 05:55:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:56:08.965 05:55:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:56:08.965 [global] 00:56:08.965 thread=1 00:56:08.965 invalidate=1 00:56:08.965 rw=write 00:56:08.965 time_based=1 00:56:08.965 runtime=1 00:56:08.965 ioengine=libaio 00:56:08.965 direct=1 00:56:08.965 bs=4096 00:56:08.965 iodepth=1 00:56:08.965 norandommap=0 00:56:08.965 numjobs=1 00:56:08.965 00:56:08.965 verify_dump=1 00:56:08.965 verify_backlog=512 00:56:08.965 verify_state_save=0 00:56:08.965 do_verify=1 00:56:08.965 verify=crc32c-intel 00:56:08.965 [job0] 00:56:08.965 filename=/dev/nvme0n1 00:56:08.965 Could not set queue depth (nvme0n1) 00:56:09.224 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:56:09.224 fio-3.35 00:56:09.224 Starting 1 thread 00:56:10.159 00:56:10.159 job0: (groupid=0, jobs=1): err= 0: pid=69134: Mon Dec 9 05:55:04 2024 00:56:10.159 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:56:10.159 slat (nsec): min=10754, max=57339, avg=13393.87, stdev=4125.77 00:56:10.159 clat (usec): min=115, max=426, avg=137.30, stdev=20.23 00:56:10.159 lat (usec): min=126, max=440, avg=150.69, stdev=21.05 00:56:10.159 clat percentiles (usec): 00:56:10.159 | 1.00th=[ 119], 5.00th=[ 121], 10.00th=[ 123], 20.00th=[ 125], 00:56:10.159 | 30.00th=[ 127], 40.00th=[ 129], 50.00th=[ 131], 60.00th=[ 135], 00:56:10.159 | 70.00th=[ 141], 80.00th=[ 149], 90.00th=[ 159], 95.00th=[ 172], 00:56:10.159 | 99.00th=[ 200], 99.50th=[ 258], 99.90th=[ 334], 99.95th=[ 388], 00:56:10.159 | 99.99th=[ 429] 00:56:10.159 write: IOPS=3850, BW=15.0MiB/s (15.8MB/s)(15.1MiB/1001msec); 0 zone resets 00:56:10.159 slat (usec): min=15, max=102, avg=19.22, stdev= 5.47 00:56:10.159 clat (usec): min=81, max=267, avg=97.31, stdev=12.17 00:56:10.159 lat (usec): min=98, max=316, avg=116.53, stdev=13.99 00:56:10.159 clat percentiles (usec): 00:56:10.159 | 1.00th=[ 84], 5.00th=[ 86], 10.00th=[ 88], 20.00th=[ 89], 00:56:10.160 | 30.00th=[ 91], 40.00th=[ 92], 50.00th=[ 94], 60.00th=[ 96], 00:56:10.160 | 70.00th=[ 98], 80.00th=[ 104], 90.00th=[ 115], 95.00th=[ 123], 00:56:10.160 | 99.00th=[ 141], 99.50th=[ 145], 99.90th=[ 161], 99.95th=[ 215], 00:56:10.160 | 99.99th=[ 269] 00:56:10.160 bw ( KiB/s): min=16384, max=16384, per=100.00%, avg=16384.00, stdev= 0.00, samples=1 00:56:10.160 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:56:10.160 lat (usec) : 100=38.73%, 250=61.01%, 500=0.26% 00:56:10.160 cpu : usr=2.70%, sys=9.10%, ctx=7438, majf=0, minf=5 00:56:10.160 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:56:10.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:10.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:10.160 issued rwts: total=3584,3854,0,0 short=0,0,0,0 dropped=0,0,0,0 00:56:10.160 latency : target=0, window=0, percentile=100.00%, depth=1 00:56:10.160 00:56:10.160 Run status group 0 (all jobs): 00:56:10.160 READ: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:56:10.160 WRITE: bw=15.0MiB/s (15.8MB/s), 15.0MiB/s-15.0MiB/s (15.8MB/s-15.8MB/s), io=15.1MiB (15.8MB), 
run=1001-1001msec 00:56:10.160 00:56:10.160 Disk stats (read/write): 00:56:10.160 nvme0n1: ios=3155/3584, merge=0/0, ticks=473/387, in_queue=860, util=91.28% 00:56:10.160 05:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:56:10.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:56:10.419 05:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:56:10.419 05:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:56:10.419 05:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:56:10.419 05:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:56:10.419 05:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:56:10.419 05:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:56:10.419 05:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:56:10.419 05:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:56:10.419 05:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:56:10.419 05:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:56:10.419 05:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:56:10.419 05:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:56:10.419 05:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:56:10.419 05:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:56:10.419 05:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:56:10.419 rmmod nvme_tcp 00:56:10.419 rmmod nvme_fabrics 00:56:10.419 rmmod nvme_keyring 00:56:10.419 05:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:56:10.419 05:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:56:10.419 05:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:56:10.419 05:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 69038 ']' 00:56:10.419 05:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 69038 00:56:10.419 05:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 69038 ']' 00:56:10.419 05:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 69038 00:56:10.419 05:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:56:10.419 05:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:56:10.419 05:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69038 00:56:10.419 05:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:56:10.419 05:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:56:10.419 killing process with pid 69038 00:56:10.419 05:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 69038' 00:56:10.419 05:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 69038 00:56:10.419 05:55:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 69038 00:56:10.685 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:56:10.685 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:56:10.685 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:56:10.685 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:56:10.685 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:56:10.685 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:56:10.685 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:56:10.685 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:56:10.685 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:56:10.685 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:56:10.685 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:56:10.685 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:56:10.685 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:56:10.685 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:56:10.685 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:56:10.685 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:56:10.685 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:56:10.685 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:56:10.685 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:56:10.685 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:56:10.685 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:56:10.685 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:56:10.944 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:56:10.944 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:56:10.944 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:56:10.944 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:56:10.944 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:56:10.944 00:56:10.944 real 0m5.393s 00:56:10.944 user 0m16.809s 00:56:10.944 sys 0m1.455s 00:56:10.944 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:56:10.944 05:55:05 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:56:10.944 ************************************ 00:56:10.944 END TEST nvmf_nmic 00:56:10.944 ************************************ 00:56:10.944 05:55:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:56:10.944 05:55:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:56:10.944 05:55:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:56:10.944 05:55:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:56:10.944 ************************************ 00:56:10.944 START TEST nvmf_fio_target 00:56:10.944 ************************************ 00:56:10.944 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:56:10.944 * Looking for test storage... 00:56:10.944 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:56:10.944 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:56:10.944 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:56:10.944 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:56:10.944 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:56:10.944 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:56:10.944 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:56:10.944 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:56:10.944 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:56:10.944 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:56:10.944 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:56:10.944 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:56:10.944 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:56:10.944 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:56:10.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:10.945 --rc genhtml_branch_coverage=1 00:56:10.945 --rc genhtml_function_coverage=1 00:56:10.945 --rc genhtml_legend=1 00:56:10.945 --rc geninfo_all_blocks=1 00:56:10.945 --rc geninfo_unexecuted_blocks=1 00:56:10.945 00:56:10.945 ' 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:56:10.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:10.945 --rc genhtml_branch_coverage=1 00:56:10.945 --rc genhtml_function_coverage=1 00:56:10.945 --rc genhtml_legend=1 00:56:10.945 --rc geninfo_all_blocks=1 00:56:10.945 --rc geninfo_unexecuted_blocks=1 00:56:10.945 00:56:10.945 ' 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:56:10.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:10.945 --rc genhtml_branch_coverage=1 00:56:10.945 --rc genhtml_function_coverage=1 00:56:10.945 --rc genhtml_legend=1 00:56:10.945 --rc geninfo_all_blocks=1 00:56:10.945 --rc geninfo_unexecuted_blocks=1 00:56:10.945 00:56:10.945 ' 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:56:10.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:10.945 --rc genhtml_branch_coverage=1 00:56:10.945 --rc genhtml_function_coverage=1 00:56:10.945 --rc genhtml_legend=1 00:56:10.945 --rc geninfo_all_blocks=1 00:56:10.945 --rc geninfo_unexecuted_blocks=1 00:56:10.945 00:56:10.945 ' 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:56:10.945 
05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:56:10.945 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:56:11.204 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:56:11.204 05:55:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:56:11.204 Cannot find device "nvmf_init_br" 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:56:11.204 Cannot find device "nvmf_init_br2" 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:56:11.204 Cannot find device "nvmf_tgt_br" 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:56:11.204 Cannot find device "nvmf_tgt_br2" 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:56:11.204 Cannot find device "nvmf_init_br" 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:56:11.204 Cannot find device "nvmf_init_br2" 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:56:11.204 Cannot find device "nvmf_tgt_br" 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:56:11.204 Cannot find device "nvmf_tgt_br2" 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:56:11.204 Cannot find device "nvmf_br" 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:56:11.204 Cannot find device "nvmf_init_if" 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:56:11.204 Cannot find device "nvmf_init_if2" 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:56:11.204 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:56:11.204 
05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:56:11.204 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:56:11.204 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:56:11.205 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:56:11.205 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:56:11.205 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:56:11.205 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:56:11.205 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:56:11.205 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:56:11.205 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:56:11.205 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:56:11.205 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:56:11.205 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:56:11.205 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:56:11.205 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:56:11.463 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:56:11.463 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:56:11.463 00:56:11.463 --- 10.0.0.3 ping statistics --- 00:56:11.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:11.463 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:56:11.463 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:56:11.463 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:56:11.463 00:56:11.463 --- 10.0.0.4 ping statistics --- 00:56:11.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:11.463 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:56:11.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:56:11.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:56:11.463 00:56:11.463 --- 10.0.0.1 ping statistics --- 00:56:11.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:11.463 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:56:11.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:56:11.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:56:11.463 00:56:11.463 --- 10.0.0.2 ping statistics --- 00:56:11.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:11.463 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=69366 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 69366 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 69366 ']' 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:56:11.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:56:11.463 05:55:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:56:11.463 [2024-12-09 05:55:05.957565] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:56:11.463 [2024-12-09 05:55:05.957673] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:56:11.720 [2024-12-09 05:55:06.118171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:56:11.720 [2024-12-09 05:55:06.158885] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:56:11.720 [2024-12-09 05:55:06.158944] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:56:11.720 [2024-12-09 05:55:06.158958] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:56:11.720 [2024-12-09 05:55:06.158968] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:56:11.720 [2024-12-09 05:55:06.158977] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:56:11.720 [2024-12-09 05:55:06.159931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:56:11.720 [2024-12-09 05:55:06.160057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:56:11.720 [2024-12-09 05:55:06.160161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:56:11.720 [2024-12-09 05:55:06.160165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:56:11.720 05:55:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:56:11.720 05:55:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:56:11.720 05:55:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:56:11.720 05:55:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:56:11.720 05:55:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:56:11.720 05:55:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:56:11.720 05:55:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:56:12.286 [2024-12-09 05:55:06.589441] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:56:12.286 05:55:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:56:12.544 05:55:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:56:12.544 05:55:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:56:12.804 05:55:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:56:12.804 05:55:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:56:13.063 05:55:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:56:13.063 05:55:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:56:13.321 05:55:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:56:13.321 05:55:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:56:13.580 05:55:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:56:13.838 05:55:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:56:13.838 05:55:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:56:14.097 05:55:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:56:14.097 05:55:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:56:14.356 05:55:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:56:14.356 05:55:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:56:14.614 05:55:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:56:14.872 05:55:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:56:14.872 05:55:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:56:15.131 05:55:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:56:15.131 05:55:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:56:15.389 05:55:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:56:15.648 [2024-12-09 05:55:10.171954] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:56:15.648 05:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:56:15.906 05:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:56:16.166 05:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:56:16.424 05:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:56:16.424 05:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:56:16.424 05:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:56:16.424 05:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:56:16.424 05:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:56:16.424 05:55:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:56:18.323 05:55:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:56:18.323 05:55:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:56:18.323 05:55:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:56:18.323 05:55:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:56:18.323 05:55:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:56:18.323 05:55:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:56:18.323 05:55:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:56:18.584 [global] 00:56:18.584 thread=1 00:56:18.584 invalidate=1 00:56:18.584 rw=write 00:56:18.584 time_based=1 00:56:18.584 runtime=1 00:56:18.584 ioengine=libaio 00:56:18.584 direct=1 00:56:18.584 bs=4096 00:56:18.584 iodepth=1 00:56:18.584 norandommap=0 00:56:18.584 numjobs=1 00:56:18.584 00:56:18.584 verify_dump=1 00:56:18.584 verify_backlog=512 00:56:18.584 verify_state_save=0 00:56:18.584 do_verify=1 00:56:18.584 verify=crc32c-intel 00:56:18.584 [job0] 00:56:18.584 filename=/dev/nvme0n1 00:56:18.584 [job1] 00:56:18.584 filename=/dev/nvme0n2 00:56:18.584 [job2] 00:56:18.584 filename=/dev/nvme0n3 00:56:18.584 [job3] 00:56:18.584 filename=/dev/nvme0n4 00:56:18.584 Could not set queue depth (nvme0n1) 00:56:18.584 Could not set queue depth (nvme0n2) 00:56:18.584 Could not set queue depth (nvme0n3) 00:56:18.584 Could not set queue depth (nvme0n4) 00:56:18.584 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:56:18.584 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:56:18.584 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:56:18.584 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:56:18.584 fio-3.35 00:56:18.584 Starting 4 threads 00:56:19.962 00:56:19.962 job0: (groupid=0, jobs=1): err= 0: pid=69653: Mon Dec 9 05:55:14 2024 00:56:19.962 read: IOPS=1810, BW=7241KiB/s (7415kB/s)(7248KiB/1001msec) 00:56:19.962 slat (nsec): min=11671, max=52583, avg=14441.85, stdev=3439.89 00:56:19.962 clat (usec): min=222, max=1666, avg=275.11, stdev=46.59 00:56:19.962 lat (usec): min=234, max=1682, avg=289.56, stdev=47.18 00:56:19.962 clat percentiles (usec): 00:56:19.962 | 1.00th=[ 247], 5.00th=[ 251], 10.00th=[ 253], 20.00th=[ 258], 00:56:19.962 | 30.00th=[ 260], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 273], 00:56:19.962 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 314], 00:56:19.962 | 99.00th=[ 416], 99.50th=[ 469], 99.90th=[ 783], 99.95th=[ 1663], 00:56:19.962 | 99.99th=[ 1663] 00:56:19.962 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:56:19.962 slat 
(usec): min=16, max=140, avg=22.11, stdev= 5.53 00:56:19.962 clat (usec): min=100, max=658, avg=206.78, stdev=22.81 00:56:19.962 lat (usec): min=121, max=680, avg=228.89, stdev=23.23 00:56:19.963 clat percentiles (usec): 00:56:19.963 | 1.00th=[ 178], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 196], 00:56:19.963 | 30.00th=[ 198], 40.00th=[ 200], 50.00th=[ 202], 60.00th=[ 206], 00:56:19.963 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 231], 95.00th=[ 241], 00:56:19.963 | 99.00th=[ 255], 99.50th=[ 265], 99.90th=[ 482], 99.95th=[ 562], 00:56:19.963 | 99.99th=[ 660] 00:56:19.963 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:56:19.963 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:56:19.963 lat (usec) : 250=54.04%, 500=45.78%, 750=0.13%, 1000=0.03% 00:56:19.963 lat (msec) : 2=0.03% 00:56:19.963 cpu : usr=1.20%, sys=5.60%, ctx=3860, majf=0, minf=7 00:56:19.963 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:56:19.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:19.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:19.963 issued rwts: total=1812,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:56:19.963 latency : target=0, window=0, percentile=100.00%, depth=1 00:56:19.963 job1: (groupid=0, jobs=1): err= 0: pid=69654: Mon Dec 9 05:55:14 2024 00:56:19.963 read: IOPS=1858, BW=7433KiB/s (7611kB/s)(7440KiB/1001msec) 00:56:19.963 slat (nsec): min=11666, max=43512, avg=14495.34, stdev=3379.36 00:56:19.963 clat (usec): min=133, max=572, avg=267.50, stdev=37.90 00:56:19.963 lat (usec): min=145, max=585, avg=281.99, stdev=38.16 00:56:19.963 clat percentiles (usec): 00:56:19.963 | 1.00th=[ 141], 5.00th=[ 178], 10.00th=[ 251], 20.00th=[ 255], 00:56:19.963 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 273], 00:56:19.963 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 314], 00:56:19.963 | 99.00th=[ 388], 99.50th=[ 392], 99.90th=[ 441], 99.95th=[ 570], 00:56:19.963 | 99.99th=[ 570] 00:56:19.963 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:56:19.963 slat (usec): min=16, max=111, avg=21.64, stdev= 4.88 00:56:19.963 clat (usec): min=102, max=700, avg=207.46, stdev=21.64 00:56:19.963 lat (usec): min=123, max=722, avg=229.09, stdev=21.91 00:56:19.963 clat percentiles (usec): 00:56:19.963 | 1.00th=[ 172], 5.00th=[ 192], 10.00th=[ 194], 20.00th=[ 196], 00:56:19.963 | 30.00th=[ 198], 40.00th=[ 200], 50.00th=[ 202], 60.00th=[ 206], 00:56:19.963 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 233], 95.00th=[ 239], 00:56:19.963 | 99.00th=[ 260], 99.50th=[ 265], 99.90th=[ 375], 99.95th=[ 379], 00:56:19.963 | 99.99th=[ 701] 00:56:19.963 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:56:19.963 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:56:19.963 lat (usec) : 250=55.86%, 500=44.09%, 750=0.05% 00:56:19.963 cpu : usr=1.40%, sys=5.40%, ctx=3908, majf=0, minf=9 00:56:19.963 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:56:19.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:19.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:19.963 issued rwts: total=1860,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:56:19.963 latency : target=0, window=0, percentile=100.00%, depth=1 00:56:19.963 job2: (groupid=0, jobs=1): err= 0: pid=69655: Mon Dec 9 05:55:14 2024 00:56:19.963 read: IOPS=1852, 
BW=7409KiB/s (7586kB/s)(7416KiB/1001msec) 00:56:19.963 slat (nsec): min=12522, max=52684, avg=16311.47, stdev=4042.50 00:56:19.963 clat (usec): min=154, max=759, avg=267.68, stdev=31.15 00:56:19.963 lat (usec): min=183, max=787, avg=283.99, stdev=31.49 00:56:19.963 clat percentiles (usec): 00:56:19.963 | 1.00th=[ 190], 5.00th=[ 245], 10.00th=[ 249], 20.00th=[ 253], 00:56:19.963 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 262], 60.00th=[ 265], 00:56:19.963 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 306], 00:56:19.963 | 99.00th=[ 334], 99.50th=[ 437], 99.90th=[ 668], 99.95th=[ 758], 00:56:19.963 | 99.99th=[ 758] 00:56:19.963 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:56:19.963 slat (usec): min=17, max=120, avg=22.34, stdev= 6.05 00:56:19.963 clat (usec): min=110, max=1542, avg=205.40, stdev=34.82 00:56:19.963 lat (usec): min=130, max=1562, avg=227.74, stdev=34.99 00:56:19.963 clat percentiles (usec): 00:56:19.963 | 1.00th=[ 137], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 196], 00:56:19.963 | 30.00th=[ 198], 40.00th=[ 200], 50.00th=[ 202], 60.00th=[ 204], 00:56:19.963 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 227], 95.00th=[ 237], 00:56:19.963 | 99.00th=[ 251], 99.50th=[ 255], 99.90th=[ 351], 99.95th=[ 400], 00:56:19.963 | 99.99th=[ 1549] 00:56:19.963 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:56:19.963 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:56:19.963 lat (usec) : 250=58.59%, 500=41.21%, 750=0.15%, 1000=0.03% 00:56:19.963 lat (msec) : 2=0.03% 00:56:19.963 cpu : usr=1.20%, sys=6.00%, ctx=3902, majf=0, minf=6 00:56:19.963 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:56:19.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:19.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:19.963 issued rwts: total=1854,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:56:19.963 latency : target=0, window=0, percentile=100.00%, depth=1 00:56:19.963 job3: (groupid=0, jobs=1): err= 0: pid=69656: Mon Dec 9 05:55:14 2024 00:56:19.963 read: IOPS=1804, BW=7217KiB/s (7390kB/s)(7224KiB/1001msec) 00:56:19.963 slat (nsec): min=12261, max=46581, avg=15717.68, stdev=3446.27 00:56:19.963 clat (usec): min=165, max=5379, avg=276.84, stdev=190.87 00:56:19.963 lat (usec): min=180, max=5392, avg=292.56, stdev=191.19 00:56:19.963 clat percentiles (usec): 00:56:19.963 | 1.00th=[ 225], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 253], 00:56:19.963 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:56:19.963 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 306], 00:56:19.963 | 99.00th=[ 326], 99.50th=[ 334], 99.90th=[ 4015], 99.95th=[ 5407], 00:56:19.963 | 99.99th=[ 5407] 00:56:19.963 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:56:19.963 slat (usec): min=17, max=129, avg=21.61, stdev= 5.26 00:56:19.963 clat (usec): min=111, max=2024, avg=205.55, stdev=46.44 00:56:19.963 lat (usec): min=130, max=2048, avg=227.16, stdev=46.64 00:56:19.963 clat percentiles (usec): 00:56:19.963 | 1.00th=[ 123], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 196], 00:56:19.963 | 30.00th=[ 198], 40.00th=[ 200], 50.00th=[ 202], 60.00th=[ 204], 00:56:19.963 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 229], 95.00th=[ 237], 00:56:19.963 | 99.00th=[ 251], 99.50th=[ 260], 99.90th=[ 461], 99.95th=[ 693], 00:56:19.963 | 99.99th=[ 2024] 00:56:19.963 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, 
avg=8192.00, stdev= 0.00, samples=1 00:56:19.963 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:56:19.963 lat (usec) : 250=57.76%, 500=42.03%, 750=0.05% 00:56:19.963 lat (msec) : 2=0.03%, 4=0.08%, 10=0.05% 00:56:19.963 cpu : usr=1.60%, sys=5.40%, ctx=3854, majf=0, minf=17 00:56:19.963 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:56:19.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:19.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:19.963 issued rwts: total=1806,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:56:19.963 latency : target=0, window=0, percentile=100.00%, depth=1 00:56:19.963 00:56:19.963 Run status group 0 (all jobs): 00:56:19.963 READ: bw=28.6MiB/s (30.0MB/s), 7217KiB/s-7433KiB/s (7390kB/s-7611kB/s), io=28.6MiB (30.0MB), run=1001-1001msec 00:56:19.963 WRITE: bw=32.0MiB/s (33.5MB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:56:19.963 00:56:19.963 Disk stats (read/write): 00:56:19.963 nvme0n1: ios=1586/1768, merge=0/0, ticks=460/376, in_queue=836, util=87.68% 00:56:19.963 nvme0n2: ios=1585/1847, merge=0/0, ticks=441/389, in_queue=830, util=88.36% 00:56:19.963 nvme0n3: ios=1536/1811, merge=0/0, ticks=420/395, in_queue=815, util=88.72% 00:56:19.963 nvme0n4: ios=1536/1743, merge=0/0, ticks=423/377, in_queue=800, util=89.12% 00:56:19.963 05:55:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:56:19.963 [global] 00:56:19.963 thread=1 00:56:19.963 invalidate=1 00:56:19.963 rw=randwrite 00:56:19.963 time_based=1 00:56:19.963 runtime=1 00:56:19.963 ioengine=libaio 00:56:19.963 direct=1 00:56:19.963 bs=4096 00:56:19.963 iodepth=1 00:56:19.963 norandommap=0 00:56:19.963 numjobs=1 00:56:19.963 00:56:19.963 verify_dump=1 00:56:19.963 verify_backlog=512 00:56:19.963 verify_state_save=0 00:56:19.963 do_verify=1 00:56:19.963 verify=crc32c-intel 00:56:19.963 [job0] 00:56:19.963 filename=/dev/nvme0n1 00:56:19.963 [job1] 00:56:19.963 filename=/dev/nvme0n2 00:56:19.963 [job2] 00:56:19.963 filename=/dev/nvme0n3 00:56:19.963 [job3] 00:56:19.963 filename=/dev/nvme0n4 00:56:19.963 Could not set queue depth (nvme0n1) 00:56:19.963 Could not set queue depth (nvme0n2) 00:56:19.963 Could not set queue depth (nvme0n3) 00:56:19.963 Could not set queue depth (nvme0n4) 00:56:19.963 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:56:19.963 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:56:19.963 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:56:19.963 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:56:19.963 fio-3.35 00:56:19.963 Starting 4 threads 00:56:21.365 00:56:21.365 job0: (groupid=0, jobs=1): err= 0: pid=69709: Mon Dec 9 05:55:15 2024 00:56:21.365 read: IOPS=2140, BW=8563KiB/s (8769kB/s)(8572KiB/1001msec) 00:56:21.365 slat (nsec): min=10346, max=44617, avg=14449.76, stdev=3520.04 00:56:21.365 clat (usec): min=142, max=774, avg=222.09, stdev=60.39 00:56:21.365 lat (usec): min=158, max=791, avg=236.54, stdev=58.74 00:56:21.365 clat percentiles (usec): 00:56:21.365 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 157], 00:56:21.365 | 30.00th=[ 165], 40.00th=[ 178], 50.00th=[ 
245], 60.00th=[ 258], 00:56:21.365 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 310], 00:56:21.365 | 99.00th=[ 351], 99.50th=[ 371], 99.90th=[ 408], 99.95th=[ 416], 00:56:21.365 | 99.99th=[ 775] 00:56:21.365 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:56:21.365 slat (nsec): min=10927, max=89802, avg=22584.08, stdev=6408.42 00:56:21.365 clat (usec): min=103, max=620, avg=166.85, stdev=45.35 00:56:21.365 lat (usec): min=128, max=637, avg=189.44, stdev=43.59 00:56:21.365 clat percentiles (usec): 00:56:21.365 | 1.00th=[ 114], 5.00th=[ 119], 10.00th=[ 122], 20.00th=[ 126], 00:56:21.365 | 30.00th=[ 130], 40.00th=[ 137], 50.00th=[ 147], 60.00th=[ 188], 00:56:21.365 | 70.00th=[ 202], 80.00th=[ 212], 90.00th=[ 227], 95.00th=[ 239], 00:56:21.365 | 99.00th=[ 269], 99.50th=[ 281], 99.90th=[ 330], 99.95th=[ 474], 00:56:21.365 | 99.99th=[ 619] 00:56:21.365 bw ( KiB/s): min=12288, max=12288, per=27.30%, avg=12288.00, stdev= 0.00, samples=1 00:56:21.365 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:56:21.365 lat (usec) : 250=76.67%, 500=23.28%, 750=0.02%, 1000=0.02% 00:56:21.365 cpu : usr=1.70%, sys=7.00%, ctx=4706, majf=0, minf=17 00:56:21.365 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:56:21.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:21.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:21.365 issued rwts: total=2143,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:56:21.365 latency : target=0, window=0, percentile=100.00%, depth=1 00:56:21.365 job1: (groupid=0, jobs=1): err= 0: pid=69710: Mon Dec 9 05:55:15 2024 00:56:21.365 read: IOPS=2119, BW=8480KiB/s (8683kB/s)(8488KiB/1001msec) 00:56:21.365 slat (nsec): min=8327, max=46677, avg=12658.37, stdev=2793.54 00:56:21.365 clat (usec): min=139, max=409, avg=220.25, stdev=59.41 00:56:21.365 lat (usec): min=151, max=422, avg=232.91, stdev=59.11 00:56:21.365 clat percentiles (usec): 00:56:21.365 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:56:21.365 | 30.00th=[ 161], 40.00th=[ 176], 50.00th=[ 245], 60.00th=[ 258], 00:56:21.365 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 306], 00:56:21.365 | 99.00th=[ 334], 99.50th=[ 363], 99.90th=[ 404], 99.95th=[ 408], 00:56:21.365 | 99.99th=[ 412] 00:56:21.365 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:56:21.365 slat (nsec): min=10934, max=71540, avg=20080.66, stdev=5498.74 00:56:21.365 clat (usec): min=103, max=5357, avg=174.74, stdev=183.52 00:56:21.366 lat (usec): min=120, max=5377, avg=194.83, stdev=183.71 00:56:21.366 clat percentiles (usec): 00:56:21.366 | 1.00th=[ 113], 5.00th=[ 117], 10.00th=[ 120], 20.00th=[ 123], 00:56:21.366 | 30.00th=[ 127], 40.00th=[ 131], 50.00th=[ 141], 60.00th=[ 190], 00:56:21.366 | 70.00th=[ 202], 80.00th=[ 215], 90.00th=[ 231], 95.00th=[ 245], 00:56:21.366 | 99.00th=[ 281], 99.50th=[ 437], 99.90th=[ 4015], 99.95th=[ 4080], 00:56:21.366 | 99.99th=[ 5342] 00:56:21.366 bw ( KiB/s): min=12288, max=12288, per=27.30%, avg=12288.00, stdev= 0.00, samples=1 00:56:21.366 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:56:21.366 lat (usec) : 250=76.36%, 500=23.39%, 750=0.09% 00:56:21.366 lat (msec) : 2=0.06%, 4=0.04%, 10=0.06% 00:56:21.366 cpu : usr=1.40%, sys=6.20%, ctx=4684, majf=0, minf=7 00:56:21.366 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:56:21.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:56:21.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:21.366 issued rwts: total=2122,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:56:21.366 latency : target=0, window=0, percentile=100.00%, depth=1 00:56:21.366 job2: (groupid=0, jobs=1): err= 0: pid=69711: Mon Dec 9 05:55:15 2024 00:56:21.366 read: IOPS=2827, BW=11.0MiB/s (11.6MB/s)(11.1MiB/1001msec) 00:56:21.366 slat (nsec): min=11742, max=62687, avg=13658.94, stdev=3698.44 00:56:21.366 clat (usec): min=139, max=1992, avg=170.06, stdev=68.80 00:56:21.366 lat (usec): min=151, max=2016, avg=183.72, stdev=69.55 00:56:21.366 clat percentiles (usec): 00:56:21.366 | 1.00th=[ 147], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 155], 00:56:21.366 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:56:21.366 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 188], 95.00th=[ 196], 00:56:21.366 | 99.00th=[ 293], 99.50th=[ 457], 99.90th=[ 1942], 99.95th=[ 1942], 00:56:21.366 | 99.99th=[ 1991] 00:56:21.366 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:56:21.366 slat (nsec): min=14763, max=77565, avg=20059.87, stdev=5920.24 00:56:21.366 clat (usec): min=105, max=1526, avg=133.20, stdev=30.20 00:56:21.366 lat (usec): min=123, max=1545, avg=153.26, stdev=31.23 00:56:21.366 clat percentiles (usec): 00:56:21.366 | 1.00th=[ 114], 5.00th=[ 117], 10.00th=[ 119], 20.00th=[ 123], 00:56:21.366 | 30.00th=[ 125], 40.00th=[ 127], 50.00th=[ 130], 60.00th=[ 133], 00:56:21.366 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 149], 95.00th=[ 159], 00:56:21.366 | 99.00th=[ 182], 99.50th=[ 237], 99.90th=[ 322], 99.95th=[ 343], 00:56:21.366 | 99.99th=[ 1532] 00:56:21.366 bw ( KiB/s): min=12288, max=12288, per=27.30%, avg=12288.00, stdev= 0.00, samples=1 00:56:21.366 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:56:21.366 lat (usec) : 250=99.17%, 500=0.63%, 750=0.08%, 1000=0.05% 00:56:21.366 lat (msec) : 2=0.07% 00:56:21.366 cpu : usr=1.70%, sys=7.90%, ctx=5902, majf=0, minf=11 00:56:21.366 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:56:21.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:21.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:21.366 issued rwts: total=2830,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:56:21.366 latency : target=0, window=0, percentile=100.00%, depth=1 00:56:21.366 job3: (groupid=0, jobs=1): err= 0: pid=69712: Mon Dec 9 05:55:15 2024 00:56:21.366 read: IOPS=2897, BW=11.3MiB/s (11.9MB/s)(11.3MiB/1001msec) 00:56:21.366 slat (nsec): min=11622, max=55414, avg=15066.41, stdev=4658.08 00:56:21.366 clat (usec): min=138, max=563, avg=164.94, stdev=17.62 00:56:21.366 lat (usec): min=151, max=576, avg=180.01, stdev=18.70 00:56:21.366 clat percentiles (usec): 00:56:21.366 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 153], 00:56:21.366 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 163], 00:56:21.366 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 188], 95.00th=[ 196], 00:56:21.366 | 99.00th=[ 210], 99.50th=[ 212], 99.90th=[ 239], 99.95th=[ 510], 00:56:21.366 | 99.99th=[ 562] 00:56:21.366 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:56:21.366 slat (nsec): min=17113, max=74615, avg=20796.87, stdev=5734.52 00:56:21.366 clat (usec): min=105, max=425, avg=131.55, stdev=15.07 00:56:21.366 lat (usec): min=123, max=457, avg=152.35, stdev=16.67 00:56:21.366 clat percentiles (usec): 00:56:21.366 | 1.00th=[ 
113], 5.00th=[ 116], 10.00th=[ 118], 20.00th=[ 121], 00:56:21.366 | 30.00th=[ 124], 40.00th=[ 126], 50.00th=[ 129], 60.00th=[ 133], 00:56:21.366 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 151], 95.00th=[ 159], 00:56:21.366 | 99.00th=[ 176], 99.50th=[ 184], 99.90th=[ 204], 99.95th=[ 302], 00:56:21.366 | 99.99th=[ 424] 00:56:21.366 bw ( KiB/s): min=12288, max=12288, per=27.30%, avg=12288.00, stdev= 0.00, samples=1 00:56:21.366 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:56:21.366 lat (usec) : 250=99.93%, 500=0.03%, 750=0.03% 00:56:21.366 cpu : usr=2.20%, sys=8.10%, ctx=5973, majf=0, minf=11 00:56:21.366 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:56:21.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:21.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:21.366 issued rwts: total=2900,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:56:21.366 latency : target=0, window=0, percentile=100.00%, depth=1 00:56:21.366 00:56:21.366 Run status group 0 (all jobs): 00:56:21.366 READ: bw=39.0MiB/s (40.9MB/s), 8480KiB/s-11.3MiB/s (8683kB/s-11.9MB/s), io=39.0MiB (40.9MB), run=1001-1001msec 00:56:21.366 WRITE: bw=44.0MiB/s (46.1MB/s), 9.99MiB/s-12.0MiB/s (10.5MB/s-12.6MB/s), io=44.0MiB (46.1MB), run=1001-1001msec 00:56:21.366 00:56:21.366 Disk stats (read/write): 00:56:21.366 nvme0n1: ios=2098/2089, merge=0/0, ticks=475/344, in_queue=819, util=87.27% 00:56:21.366 nvme0n2: ios=2086/2059, merge=0/0, ticks=466/346, in_queue=812, util=87.93% 00:56:21.366 nvme0n3: ios=2491/2560, merge=0/0, ticks=421/363, in_queue=784, util=89.20% 00:56:21.366 nvme0n4: ios=2553/2560, merge=0/0, ticks=423/365, in_queue=788, util=89.77% 00:56:21.366 05:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:56:21.366 [global] 00:56:21.366 thread=1 00:56:21.366 invalidate=1 00:56:21.366 rw=write 00:56:21.366 time_based=1 00:56:21.366 runtime=1 00:56:21.366 ioengine=libaio 00:56:21.366 direct=1 00:56:21.366 bs=4096 00:56:21.366 iodepth=128 00:56:21.366 norandommap=0 00:56:21.366 numjobs=1 00:56:21.366 00:56:21.366 verify_dump=1 00:56:21.366 verify_backlog=512 00:56:21.366 verify_state_save=0 00:56:21.366 do_verify=1 00:56:21.366 verify=crc32c-intel 00:56:21.366 [job0] 00:56:21.366 filename=/dev/nvme0n1 00:56:21.366 [job1] 00:56:21.366 filename=/dev/nvme0n2 00:56:21.366 [job2] 00:56:21.366 filename=/dev/nvme0n3 00:56:21.366 [job3] 00:56:21.366 filename=/dev/nvme0n4 00:56:21.366 Could not set queue depth (nvme0n1) 00:56:21.366 Could not set queue depth (nvme0n2) 00:56:21.366 Could not set queue depth (nvme0n3) 00:56:21.366 Could not set queue depth (nvme0n4) 00:56:21.366 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:56:21.366 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:56:21.366 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:56:21.366 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:56:21.366 fio-3.35 00:56:21.366 Starting 4 threads 00:56:22.744 00:56:22.744 job0: (groupid=0, jobs=1): err= 0: pid=69771: Mon Dec 9 05:55:16 2024 00:56:22.744 read: IOPS=2502, BW=9.77MiB/s (10.2MB/s)(9.82MiB/1005msec) 00:56:22.744 slat (usec): min=4, max=8322, avg=231.09, stdev=955.22 
00:56:22.744 clat (usec): min=416, max=43520, avg=29250.12, stdev=7139.20 00:56:22.744 lat (usec): min=4720, max=43534, avg=29481.21, stdev=7121.70 00:56:22.744 clat percentiles (usec): 00:56:22.744 | 1.00th=[ 7373], 5.00th=[19268], 10.00th=[21365], 20.00th=[23725], 00:56:22.744 | 30.00th=[24773], 40.00th=[27919], 50.00th=[29492], 60.00th=[31065], 00:56:22.744 | 70.00th=[32113], 80.00th=[34866], 90.00th=[39060], 95.00th=[42730], 00:56:22.744 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:56:22.744 | 99.99th=[43779] 00:56:22.744 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:56:22.744 slat (usec): min=10, max=9433, avg=156.08, stdev=763.84 00:56:22.744 clat (usec): min=12717, max=31689, avg=20731.73, stdev=4623.06 00:56:22.744 lat (usec): min=14269, max=31715, avg=20887.81, stdev=4590.88 00:56:22.744 clat percentiles (usec): 00:56:22.744 | 1.00th=[13566], 5.00th=[15926], 10.00th=[16319], 20.00th=[16712], 00:56:22.744 | 30.00th=[16909], 40.00th=[17171], 50.00th=[19006], 60.00th=[21890], 00:56:22.745 | 70.00th=[23987], 80.00th=[25560], 90.00th=[27132], 95.00th=[29230], 00:56:22.745 | 99.00th=[31065], 99.50th=[31589], 99.90th=[31589], 99.95th=[31589], 00:56:22.745 | 99.99th=[31589] 00:56:22.745 bw ( KiB/s): min= 8192, max=12288, per=15.48%, avg=10240.00, stdev=2896.31, samples=2 00:56:22.745 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:56:22.745 lat (usec) : 500=0.02% 00:56:22.745 lat (msec) : 10=0.63%, 20=29.50%, 50=69.85% 00:56:22.745 cpu : usr=3.09%, sys=6.97%, ctx=228, majf=0, minf=13 00:56:22.745 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:56:22.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:22.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:56:22.745 issued rwts: total=2515,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:56:22.745 latency : target=0, window=0, percentile=100.00%, depth=128 00:56:22.745 job1: (groupid=0, jobs=1): err= 0: pid=69772: Mon Dec 9 05:55:16 2024 00:56:22.745 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:56:22.745 slat (usec): min=7, max=5169, avg=85.14, stdev=413.07 00:56:22.745 clat (usec): min=6788, max=16339, avg=11220.42, stdev=1299.99 00:56:22.745 lat (usec): min=6823, max=16352, avg=11305.56, stdev=1331.05 00:56:22.745 clat percentiles (usec): 00:56:22.745 | 1.00th=[ 7767], 5.00th=[ 9241], 10.00th=[10290], 20.00th=[10552], 00:56:22.745 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:56:22.745 | 70.00th=[11338], 80.00th=[11863], 90.00th=[12911], 95.00th=[13829], 00:56:22.745 | 99.00th=[15270], 99.50th=[16319], 99.90th=[16319], 99.95th=[16319], 00:56:22.745 | 99.99th=[16319] 00:56:22.745 write: IOPS=5846, BW=22.8MiB/s (23.9MB/s)(22.9MiB/1003msec); 0 zone resets 00:56:22.745 slat (usec): min=9, max=6776, avg=82.07, stdev=400.70 00:56:22.745 clat (usec): min=470, max=17667, avg=10853.66, stdev=1470.66 00:56:22.745 lat (usec): min=3840, max=18889, avg=10935.73, stdev=1507.99 00:56:22.745 clat percentiles (usec): 00:56:22.745 | 1.00th=[ 5407], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10028], 00:56:22.745 | 30.00th=[10159], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:56:22.745 | 70.00th=[11469], 80.00th=[11600], 90.00th=[12256], 95.00th=[13698], 00:56:22.745 | 99.00th=[15139], 99.50th=[15664], 99.90th=[16057], 99.95th=[16057], 00:56:22.745 | 99.99th=[17695] 00:56:22.745 bw ( KiB/s): min=21320, max=24568, per=34.69%, avg=22944.00, 
stdev=2296.68, samples=2 00:56:22.745 iops : min= 5330, max= 6142, avg=5736.00, stdev=574.17, samples=2 00:56:22.745 lat (usec) : 500=0.01% 00:56:22.745 lat (msec) : 4=0.04%, 10=13.67%, 20=86.27% 00:56:22.745 cpu : usr=4.49%, sys=14.57%, ctx=606, majf=0, minf=14 00:56:22.745 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:56:22.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:22.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:56:22.745 issued rwts: total=5632,5864,0,0 short=0,0,0,0 dropped=0,0,0,0 00:56:22.745 latency : target=0, window=0, percentile=100.00%, depth=128 00:56:22.745 job2: (groupid=0, jobs=1): err= 0: pid=69773: Mon Dec 9 05:55:16 2024 00:56:22.745 read: IOPS=4868, BW=19.0MiB/s (19.9MB/s)(19.1MiB/1002msec) 00:56:22.745 slat (usec): min=4, max=3106, avg=97.73, stdev=441.83 00:56:22.745 clat (usec): min=369, max=15348, avg=12801.63, stdev=1246.64 00:56:22.745 lat (usec): min=3347, max=17018, avg=12899.36, stdev=1179.07 00:56:22.745 clat percentiles (usec): 00:56:22.745 | 1.00th=[ 6390], 5.00th=[10683], 10.00th=[11207], 20.00th=[12649], 00:56:22.745 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13042], 60.00th=[13173], 00:56:22.745 | 70.00th=[13304], 80.00th=[13435], 90.00th=[13698], 95.00th=[13829], 00:56:22.745 | 99.00th=[14615], 99.50th=[14746], 99.90th=[15139], 99.95th=[15270], 00:56:22.745 | 99.99th=[15401] 00:56:22.745 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:56:22.745 slat (usec): min=8, max=2960, avg=94.60, stdev=381.87 00:56:22.745 clat (usec): min=9468, max=15257, avg=12527.15, stdev=1222.39 00:56:22.745 lat (usec): min=9934, max=15296, avg=12621.75, stdev=1219.70 00:56:22.745 clat percentiles (usec): 00:56:22.745 | 1.00th=[10290], 5.00th=[10683], 10.00th=[10945], 20.00th=[11207], 00:56:22.745 | 30.00th=[11600], 40.00th=[12125], 50.00th=[12649], 60.00th=[12911], 00:56:22.745 | 70.00th=[13304], 80.00th=[13698], 90.00th=[14091], 95.00th=[14484], 00:56:22.745 | 99.00th=[15008], 99.50th=[15008], 99.90th=[15270], 99.95th=[15270], 00:56:22.745 | 99.99th=[15270] 00:56:22.745 bw ( KiB/s): min=20480, max=20521, per=31.00%, avg=20500.50, stdev=28.99, samples=2 00:56:22.745 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:56:22.745 lat (usec) : 500=0.01% 00:56:22.745 lat (msec) : 4=0.32%, 10=0.66%, 20=99.01% 00:56:22.745 cpu : usr=4.99%, sys=14.07%, ctx=609, majf=0, minf=15 00:56:22.745 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:56:22.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:22.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:56:22.745 issued rwts: total=4878,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:56:22.745 latency : target=0, window=0, percentile=100.00%, depth=128 00:56:22.745 job3: (groupid=0, jobs=1): err= 0: pid=69774: Mon Dec 9 05:55:16 2024 00:56:22.745 read: IOPS=2843, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1003msec) 00:56:22.745 slat (usec): min=4, max=9259, avg=157.81, stdev=780.51 00:56:22.745 clat (usec): min=1754, max=32292, avg=19668.22, stdev=3354.88 00:56:22.745 lat (usec): min=5276, max=32327, avg=19826.02, stdev=3412.83 00:56:22.745 clat percentiles (usec): 00:56:22.745 | 1.00th=[ 5735], 5.00th=[16057], 10.00th=[16909], 20.00th=[17695], 00:56:22.745 | 30.00th=[18220], 40.00th=[18744], 50.00th=[19006], 60.00th=[19792], 00:56:22.745 | 70.00th=[21103], 80.00th=[22152], 90.00th=[23725], 95.00th=[25822], 00:56:22.745 | 
99.00th=[27395], 99.50th=[28181], 99.90th=[31065], 99.95th=[31065], 00:56:22.745 | 99.99th=[32375] 00:56:22.745 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:56:22.745 slat (usec): min=10, max=5914, avg=170.69, stdev=652.26 00:56:22.745 clat (usec): min=12822, max=34063, avg=22921.95, stdev=6180.48 00:56:22.745 lat (usec): min=12842, max=34094, avg=23092.64, stdev=6226.68 00:56:22.745 clat percentiles (usec): 00:56:22.745 | 1.00th=[14222], 5.00th=[15664], 10.00th=[15926], 20.00th=[16712], 00:56:22.745 | 30.00th=[17171], 40.00th=[19530], 50.00th=[22676], 60.00th=[24249], 00:56:22.745 | 70.00th=[26346], 80.00th=[29754], 90.00th=[32637], 95.00th=[33817], 00:56:22.745 | 99.00th=[33817], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:56:22.745 | 99.99th=[33817] 00:56:22.745 bw ( KiB/s): min=12288, max=12312, per=18.60%, avg=12300.00, stdev=16.97, samples=2 00:56:22.745 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:56:22.745 lat (msec) : 2=0.02%, 10=0.71%, 20=52.60%, 50=46.67% 00:56:22.745 cpu : usr=3.09%, sys=9.78%, ctx=337, majf=0, minf=11 00:56:22.745 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:56:22.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:22.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:56:22.745 issued rwts: total=2852,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:56:22.745 latency : target=0, window=0, percentile=100.00%, depth=128 00:56:22.745 00:56:22.745 Run status group 0 (all jobs): 00:56:22.745 READ: bw=61.7MiB/s (64.7MB/s), 9.77MiB/s-21.9MiB/s (10.2MB/s-23.0MB/s), io=62.0MiB (65.0MB), run=1002-1005msec 00:56:22.745 WRITE: bw=64.6MiB/s (67.7MB/s), 9.95MiB/s-22.8MiB/s (10.4MB/s-23.9MB/s), io=64.9MiB (68.1MB), run=1002-1005msec 00:56:22.745 00:56:22.745 Disk stats (read/write): 00:56:22.745 nvme0n1: ios=2098/2226, merge=0/0, ticks=15570/10213, in_queue=25783, util=88.08% 00:56:22.745 nvme0n2: ios=4746/5120, merge=0/0, ticks=24497/24614, in_queue=49111, util=88.47% 00:56:22.745 nvme0n3: ios=4113/4503, merge=0/0, ticks=12375/12457, in_queue=24832, util=89.59% 00:56:22.745 nvme0n4: ios=2560/2615, merge=0/0, ticks=16320/17427, in_queue=33747, util=89.73% 00:56:22.745 05:55:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:56:22.745 [global] 00:56:22.745 thread=1 00:56:22.745 invalidate=1 00:56:22.745 rw=randwrite 00:56:22.745 time_based=1 00:56:22.745 runtime=1 00:56:22.745 ioengine=libaio 00:56:22.745 direct=1 00:56:22.745 bs=4096 00:56:22.745 iodepth=128 00:56:22.745 norandommap=0 00:56:22.745 numjobs=1 00:56:22.745 00:56:22.745 verify_dump=1 00:56:22.745 verify_backlog=512 00:56:22.745 verify_state_save=0 00:56:22.745 do_verify=1 00:56:22.745 verify=crc32c-intel 00:56:22.745 [job0] 00:56:22.745 filename=/dev/nvme0n1 00:56:22.745 [job1] 00:56:22.745 filename=/dev/nvme0n2 00:56:22.745 [job2] 00:56:22.745 filename=/dev/nvme0n3 00:56:22.745 [job3] 00:56:22.745 filename=/dev/nvme0n4 00:56:22.745 Could not set queue depth (nvme0n1) 00:56:22.745 Could not set queue depth (nvme0n2) 00:56:22.745 Could not set queue depth (nvme0n3) 00:56:22.745 Could not set queue depth (nvme0n4) 00:56:22.745 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:56:22.745 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:56:22.745 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:56:22.745 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:56:22.745 fio-3.35 00:56:22.745 Starting 4 threads 00:56:24.125 00:56:24.126 job0: (groupid=0, jobs=1): err= 0: pid=69834: Mon Dec 9 05:55:18 2024 00:56:24.126 read: IOPS=4715, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1004msec) 00:56:24.126 slat (usec): min=3, max=12313, avg=111.91, stdev=699.32 00:56:24.126 clat (usec): min=1488, max=25711, avg=13795.84, stdev=3802.60 00:56:24.126 lat (usec): min=5240, max=25731, avg=13907.75, stdev=3829.38 00:56:24.126 clat percentiles (usec): 00:56:24.126 | 1.00th=[ 5735], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10683], 00:56:24.126 | 30.00th=[11600], 40.00th=[12125], 50.00th=[12518], 60.00th=[13042], 00:56:24.126 | 70.00th=[15139], 80.00th=[16450], 90.00th=[19792], 95.00th=[22152], 00:56:24.126 | 99.00th=[24249], 99.50th=[24773], 99.90th=[25560], 99.95th=[25822], 00:56:24.126 | 99.99th=[25822] 00:56:24.126 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:56:24.126 slat (usec): min=5, max=9395, avg=84.66, stdev=318.68 00:56:24.126 clat (usec): min=4382, max=25734, avg=12086.11, stdev=2529.37 00:56:24.126 lat (usec): min=4402, max=25752, avg=12170.77, stdev=2551.94 00:56:24.126 clat percentiles (usec): 00:56:24.126 | 1.00th=[ 5145], 5.00th=[ 6521], 10.00th=[ 7767], 20.00th=[10028], 00:56:24.126 | 30.00th=[11863], 40.00th=[13042], 50.00th=[13173], 60.00th=[13435], 00:56:24.126 | 70.00th=[13566], 80.00th=[13698], 90.00th=[13960], 95.00th=[14091], 00:56:24.126 | 99.00th=[14353], 99.50th=[14484], 99.90th=[24773], 99.95th=[25297], 00:56:24.126 | 99.99th=[25822] 00:56:24.126 bw ( KiB/s): min=20472, max=20480, per=26.76%, avg=20476.00, stdev= 5.66, samples=2 00:56:24.126 iops : min= 5118, max= 5120, avg=5119.00, stdev= 1.41, samples=2 00:56:24.126 lat (msec) : 2=0.01%, 10=14.60%, 20=80.61%, 50=4.78% 00:56:24.126 cpu : usr=4.79%, sys=11.27%, ctx=745, majf=0, minf=13 00:56:24.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:56:24.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:24.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:56:24.126 issued rwts: total=4734,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:56:24.126 latency : target=0, window=0, percentile=100.00%, depth=128 00:56:24.126 job1: (groupid=0, jobs=1): err= 0: pid=69835: Mon Dec 9 05:55:18 2024 00:56:24.126 read: IOPS=4831, BW=18.9MiB/s (19.8MB/s)(19.0MiB/1006msec) 00:56:24.126 slat (usec): min=3, max=12208, avg=109.77, stdev=689.28 00:56:24.126 clat (usec): min=3209, max=25202, avg=13757.47, stdev=3623.77 00:56:24.126 lat (usec): min=5191, max=25285, avg=13867.24, stdev=3654.08 00:56:24.126 clat percentiles (usec): 00:56:24.126 | 1.00th=[ 5932], 5.00th=[ 9503], 10.00th=[10290], 20.00th=[10814], 00:56:24.126 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12780], 60.00th=[13173], 00:56:24.126 | 70.00th=[14877], 80.00th=[16188], 90.00th=[19530], 95.00th=[21627], 00:56:24.126 | 99.00th=[23725], 99.50th=[23987], 99.90th=[25035], 99.95th=[25035], 00:56:24.126 | 99.99th=[25297] 00:56:24.126 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:56:24.126 slat (usec): min=5, max=9354, avg=83.61, stdev=317.61 00:56:24.126 clat (usec): min=4027, max=25054, avg=11823.56, stdev=2607.67 00:56:24.126 lat (usec): min=4076, 
max=25060, avg=11907.17, stdev=2627.49 00:56:24.126 clat percentiles (usec): 00:56:24.126 | 1.00th=[ 5080], 5.00th=[ 5997], 10.00th=[ 7177], 20.00th=[ 9765], 00:56:24.126 | 30.00th=[11600], 40.00th=[12780], 50.00th=[13042], 60.00th=[13173], 00:56:24.126 | 70.00th=[13304], 80.00th=[13435], 90.00th=[13698], 95.00th=[13829], 00:56:24.126 | 99.00th=[14222], 99.50th=[14222], 99.90th=[24249], 99.95th=[24249], 00:56:24.126 | 99.99th=[25035] 00:56:24.126 bw ( KiB/s): min=20480, max=20521, per=26.79%, avg=20500.50, stdev=28.99, samples=2 00:56:24.126 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:56:24.126 lat (msec) : 4=0.01%, 10=14.41%, 20=81.00%, 50=4.58% 00:56:24.126 cpu : usr=4.68%, sys=12.84%, ctx=766, majf=0, minf=17 00:56:24.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:56:24.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:24.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:56:24.126 issued rwts: total=4860,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:56:24.126 latency : target=0, window=0, percentile=100.00%, depth=128 00:56:24.126 job2: (groupid=0, jobs=1): err= 0: pid=69836: Mon Dec 9 05:55:18 2024 00:56:24.126 read: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec) 00:56:24.126 slat (usec): min=5, max=13545, avg=126.88, stdev=826.70 00:56:24.126 clat (usec): min=5879, max=28083, avg=15863.96, stdev=3956.87 00:56:24.126 lat (usec): min=5894, max=28110, avg=15990.83, stdev=3994.46 00:56:24.126 clat percentiles (usec): 00:56:24.126 | 1.00th=[ 6521], 5.00th=[11469], 10.00th=[11994], 20.00th=[12518], 00:56:24.126 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14746], 60.00th=[15139], 00:56:24.126 | 70.00th=[16712], 80.00th=[18744], 90.00th=[21890], 95.00th=[24773], 00:56:24.126 | 99.00th=[27395], 99.50th=[27657], 99.90th=[28181], 99.95th=[28181], 00:56:24.126 | 99.99th=[28181] 00:56:24.126 write: IOPS=4456, BW=17.4MiB/s (18.3MB/s)(17.6MiB/1012msec); 0 zone resets 00:56:24.126 slat (usec): min=5, max=11110, avg=98.49, stdev=403.94 00:56:24.126 clat (usec): min=5370, max=28008, avg=14035.31, stdev=3170.05 00:56:24.126 lat (usec): min=5392, max=28020, avg=14133.80, stdev=3197.78 00:56:24.126 clat percentiles (usec): 00:56:24.126 | 1.00th=[ 6063], 5.00th=[ 7111], 10.00th=[ 8455], 20.00th=[11731], 00:56:24.126 | 30.00th=[14091], 40.00th=[14877], 50.00th=[15270], 60.00th=[15401], 00:56:24.126 | 70.00th=[15533], 80.00th=[15795], 90.00th=[16057], 95.00th=[16319], 00:56:24.126 | 99.00th=[23987], 99.50th=[25560], 99.90th=[27657], 99.95th=[27919], 00:56:24.126 | 99.99th=[27919] 00:56:24.126 bw ( KiB/s): min=17352, max=17747, per=22.94%, avg=17549.50, stdev=279.31, samples=2 00:56:24.126 iops : min= 4338, max= 4436, avg=4387.00, stdev=69.30, samples=2 00:56:24.126 lat (msec) : 10=8.62%, 20=83.99%, 50=7.39% 00:56:24.126 cpu : usr=4.65%, sys=10.78%, ctx=632, majf=0, minf=11 00:56:24.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:56:24.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:24.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:56:24.126 issued rwts: total=4096,4510,0,0 short=0,0,0,0 dropped=0,0,0,0 00:56:24.126 latency : target=0, window=0, percentile=100.00%, depth=128 00:56:24.126 job3: (groupid=0, jobs=1): err= 0: pid=69837: Mon Dec 9 05:55:18 2024 00:56:24.126 read: IOPS=4171, BW=16.3MiB/s (17.1MB/s)(16.5MiB/1012msec) 00:56:24.126 slat (usec): min=3, max=14190, avg=127.25, 
stdev=802.96 00:56:24.126 clat (usec): min=1301, max=29803, avg=15657.22, stdev=4045.57 00:56:24.126 lat (usec): min=5385, max=29819, avg=15784.47, stdev=4078.78 00:56:24.126 clat percentiles (usec): 00:56:24.126 | 1.00th=[ 6259], 5.00th=[11207], 10.00th=[11469], 20.00th=[12649], 00:56:24.126 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14615], 60.00th=[15008], 00:56:24.126 | 70.00th=[16581], 80.00th=[18482], 90.00th=[21627], 95.00th=[24511], 00:56:24.126 | 99.00th=[27132], 99.50th=[27395], 99.90th=[29754], 99.95th=[29754], 00:56:24.126 | 99.99th=[29754] 00:56:24.126 write: IOPS=4553, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1012msec); 0 zone resets 00:56:24.126 slat (usec): min=3, max=11090, avg=94.70, stdev=385.59 00:56:24.126 clat (usec): min=4303, max=29700, avg=13471.79, stdev=2935.46 00:56:24.126 lat (usec): min=4335, max=29717, avg=13566.49, stdev=2966.44 00:56:24.126 clat percentiles (usec): 00:56:24.126 | 1.00th=[ 5473], 5.00th=[ 6849], 10.00th=[ 8225], 20.00th=[11338], 00:56:24.126 | 30.00th=[13698], 40.00th=[14353], 50.00th=[14615], 60.00th=[14877], 00:56:24.126 | 70.00th=[15008], 80.00th=[15270], 90.00th=[15664], 95.00th=[15926], 00:56:24.126 | 99.00th=[16450], 99.50th=[16909], 99.90th=[27132], 99.95th=[27395], 00:56:24.126 | 99.99th=[29754] 00:56:24.126 bw ( KiB/s): min=18240, max=18645, per=24.10%, avg=18442.50, stdev=286.38, samples=2 00:56:24.126 iops : min= 4560, max= 4661, avg=4610.50, stdev=71.42, samples=2 00:56:24.126 lat (msec) : 2=0.01%, 10=8.90%, 20=84.04%, 50=7.04% 00:56:24.126 cpu : usr=5.34%, sys=10.09%, ctx=646, majf=0, minf=12 00:56:24.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:56:24.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:24.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:56:24.126 issued rwts: total=4222,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:56:24.126 latency : target=0, window=0, percentile=100.00%, depth=128 00:56:24.126 00:56:24.126 Run status group 0 (all jobs): 00:56:24.126 READ: bw=69.1MiB/s (72.5MB/s), 15.8MiB/s-18.9MiB/s (16.6MB/s-19.8MB/s), io=70.0MiB (73.4MB), run=1004-1012msec 00:56:24.126 WRITE: bw=74.7MiB/s (78.3MB/s), 17.4MiB/s-19.9MiB/s (18.3MB/s-20.9MB/s), io=75.6MiB (79.3MB), run=1004-1012msec 00:56:24.126 00:56:24.126 Disk stats (read/write): 00:56:24.126 nvme0n1: ios=4146/4439, merge=0/0, ticks=52603/51643, in_queue=104246, util=88.28% 00:56:24.126 nvme0n2: ios=4136/4495, merge=0/0, ticks=51985/51336, in_queue=103321, util=88.47% 00:56:24.126 nvme0n3: ios=3597/3759, merge=0/0, ticks=53533/50436, in_queue=103969, util=89.59% 00:56:24.126 nvme0n4: ios=3584/3967, merge=0/0, ticks=52482/51507, in_queue=103989, util=89.74% 00:56:24.126 05:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:56:24.126 05:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=69850 00:56:24.126 05:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:56:24.126 05:55:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:56:24.126 [global] 00:56:24.126 thread=1 00:56:24.126 invalidate=1 00:56:24.126 rw=read 00:56:24.126 time_based=1 00:56:24.126 runtime=10 00:56:24.126 ioengine=libaio 00:56:24.126 direct=1 00:56:24.126 bs=4096 00:56:24.126 iodepth=1 00:56:24.126 norandommap=1 00:56:24.126 numjobs=1 00:56:24.126 00:56:24.126 [job0] 00:56:24.127 filename=/dev/nvme0n1 00:56:24.127 
[job1] 00:56:24.127 filename=/dev/nvme0n2 00:56:24.127 [job2] 00:56:24.127 filename=/dev/nvme0n3 00:56:24.127 [job3] 00:56:24.127 filename=/dev/nvme0n4 00:56:24.127 Could not set queue depth (nvme0n1) 00:56:24.127 Could not set queue depth (nvme0n2) 00:56:24.127 Could not set queue depth (nvme0n3) 00:56:24.127 Could not set queue depth (nvme0n4) 00:56:24.127 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:56:24.127 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:56:24.127 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:56:24.127 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:56:24.127 fio-3.35 00:56:24.127 Starting 4 threads 00:56:27.415 05:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:56:27.415 fio: pid=69895, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:56:27.415 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=40005632, buflen=4096 00:56:27.415 05:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:56:27.674 fio: pid=69893, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:56:27.674 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=67342336, buflen=4096 00:56:27.674 05:55:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:56:27.674 05:55:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:56:27.674 fio: pid=69890, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:56:27.674 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11931648, buflen=4096 00:56:27.933 05:55:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:56:27.933 05:55:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:56:28.191 fio: pid=69891, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:56:28.191 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=52916224, buflen=4096 00:56:28.191 05:55:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:56:28.191 05:55:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:56:28.191 00:56:28.191 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69890: Mon Dec 9 05:55:22 2024 00:56:28.191 read: IOPS=5457, BW=21.3MiB/s (22.4MB/s)(75.4MiB/3536msec) 00:56:28.191 slat (usec): min=9, max=11454, avg=16.33, stdev=136.02 00:56:28.191 clat (usec): min=116, max=42152, avg=165.63, stdev=303.56 00:56:28.191 lat (usec): min=130, max=42168, avg=181.96, stdev=333.67 00:56:28.191 clat percentiles (usec): 00:56:28.191 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 
00:56:28.191 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 163], 00:56:28.191 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 188], 95.00th=[ 200], 00:56:28.191 | 99.00th=[ 245], 99.50th=[ 260], 99.90th=[ 289], 99.95th=[ 445], 00:56:28.191 | 99.99th=[ 1614] 00:56:28.191 bw ( KiB/s): min=21496, max=23056, per=36.50%, avg=22373.33, stdev=577.17, samples=6 00:56:28.191 iops : min= 5374, max= 5764, avg=5593.33, stdev=144.29, samples=6 00:56:28.191 lat (usec) : 250=99.25%, 500=0.70%, 750=0.02%, 1000=0.01% 00:56:28.191 lat (msec) : 2=0.02%, 50=0.01% 00:56:28.191 cpu : usr=1.44%, sys=6.42%, ctx=19304, majf=0, minf=1 00:56:28.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:56:28.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:28.191 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:28.191 issued rwts: total=19298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:56:28.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:56:28.191 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69891: Mon Dec 9 05:55:22 2024 00:56:28.191 read: IOPS=3388, BW=13.2MiB/s (13.9MB/s)(50.5MiB/3813msec) 00:56:28.191 slat (usec): min=7, max=11381, avg=22.25, stdev=184.63 00:56:28.191 clat (usec): min=119, max=42193, avg=271.15, stdev=383.52 00:56:28.191 lat (usec): min=131, max=50041, avg=293.40, stdev=481.74 00:56:28.191 clat percentiles (usec): 00:56:28.191 | 1.00th=[ 131], 5.00th=[ 139], 10.00th=[ 161], 20.00th=[ 249], 00:56:28.191 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 285], 00:56:28.191 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 310], 95.00th=[ 322], 00:56:28.191 | 99.00th=[ 355], 99.50th=[ 420], 99.90th=[ 1582], 99.95th=[ 3097], 00:56:28.191 | 99.99th=[ 5932] 00:56:28.191 bw ( KiB/s): min=12520, max=14519, per=21.33%, avg=13070.71, stdev=677.75, samples=7 00:56:28.191 iops : min= 3130, max= 3629, avg=3267.57, stdev=169.17, samples=7 00:56:28.191 lat (usec) : 250=20.67%, 500=79.04%, 750=0.14%, 1000=0.04% 00:56:28.191 lat (msec) : 2=0.04%, 4=0.05%, 10=0.01%, 50=0.01% 00:56:28.191 cpu : usr=0.94%, sys=5.40%, ctx=12934, majf=0, minf=1 00:56:28.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:56:28.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:28.191 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:28.191 issued rwts: total=12920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:56:28.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:56:28.191 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69893: Mon Dec 9 05:55:22 2024 00:56:28.191 read: IOPS=5006, BW=19.6MiB/s (20.5MB/s)(64.2MiB/3284msec) 00:56:28.191 slat (usec): min=10, max=12710, avg=15.42, stdev=113.19 00:56:28.191 clat (usec): min=139, max=5478, avg=183.04, stdev=123.86 00:56:28.191 lat (usec): min=151, max=12950, avg=198.47, stdev=168.50 00:56:28.191 clat percentiles (usec): 00:56:28.191 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:56:28.191 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 176], 00:56:28.191 | 70.00th=[ 182], 80.00th=[ 190], 90.00th=[ 208], 95.00th=[ 265], 00:56:28.191 | 99.00th=[ 297], 99.50th=[ 314], 99.90th=[ 2024], 99.95th=[ 3425], 00:56:28.191 | 99.99th=[ 5473] 00:56:28.191 bw ( KiB/s): min=18216, max=21968, per=33.75%, avg=20685.33, stdev=1359.10, samples=6 
00:56:28.191 iops : min= 4554, max= 5492, avg=5171.33, stdev=339.78, samples=6 00:56:28.191 lat (usec) : 250=93.23%, 500=6.56%, 750=0.07%, 1000=0.01% 00:56:28.191 lat (msec) : 2=0.02%, 4=0.07%, 10=0.03% 00:56:28.191 cpu : usr=1.07%, sys=6.03%, ctx=16455, majf=0, minf=2 00:56:28.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:56:28.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:28.191 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:28.191 issued rwts: total=16442,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:56:28.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:56:28.191 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69895: Mon Dec 9 05:55:22 2024 00:56:28.191 read: IOPS=3287, BW=12.8MiB/s (13.5MB/s)(38.2MiB/2971msec) 00:56:28.191 slat (nsec): min=11654, max=63847, avg=15267.37, stdev=4753.35 00:56:28.191 clat (usec): min=144, max=2196, avg=287.47, stdev=48.46 00:56:28.191 lat (usec): min=156, max=2223, avg=302.74, stdev=48.85 00:56:28.191 clat percentiles (usec): 00:56:28.191 | 1.00th=[ 155], 5.00th=[ 182], 10.00th=[ 260], 20.00th=[ 273], 00:56:28.191 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 297], 00:56:28.191 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 322], 95.00th=[ 330], 00:56:28.191 | 99.00th=[ 396], 99.50th=[ 461], 99.90th=[ 668], 99.95th=[ 717], 00:56:28.191 | 99.99th=[ 2212] 00:56:28.191 bw ( KiB/s): min=12616, max=13008, per=20.99%, avg=12862.40, stdev=155.89, samples=5 00:56:28.191 iops : min= 3154, max= 3252, avg=3215.60, stdev=38.97, samples=5 00:56:28.191 lat (usec) : 250=7.26%, 500=92.41%, 750=0.28%, 1000=0.02% 00:56:28.191 lat (msec) : 2=0.01%, 4=0.01% 00:56:28.191 cpu : usr=0.88%, sys=4.07%, ctx=9769, majf=0, minf=2 00:56:28.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:56:28.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:28.191 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:56:28.191 issued rwts: total=9768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:56:28.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:56:28.191 00:56:28.191 Run status group 0 (all jobs): 00:56:28.191 READ: bw=59.9MiB/s (62.8MB/s), 12.8MiB/s-21.3MiB/s (13.5MB/s-22.4MB/s), io=228MiB (239MB), run=2971-3813msec 00:56:28.191 00:56:28.191 Disk stats (read/write): 00:56:28.191 nvme0n1: ios=18610/0, merge=0/0, ticks=3166/0, in_queue=3166, util=95.34% 00:56:28.191 nvme0n2: ios=11858/0, merge=0/0, ticks=3422/0, in_queue=3422, util=95.48% 00:56:28.191 nvme0n3: ios=15857/0, merge=0/0, ticks=2904/0, in_queue=2904, util=95.62% 00:56:28.191 nvme0n4: ios=9224/0, merge=0/0, ticks=2781/0, in_queue=2781, util=96.66% 00:56:28.450 05:55:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:56:28.450 05:55:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:56:28.708 05:55:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:56:28.708 05:55:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:56:28.967 05:55:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:56:28.967 05:55:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:56:29.225 05:55:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:56:29.225 05:55:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:56:29.485 05:55:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:56:29.485 05:55:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 69850 00:56:29.485 05:55:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:56:29.485 05:55:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:56:29.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:56:29.485 05:55:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:56:29.485 05:55:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:56:29.485 05:55:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:56:29.485 05:55:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:56:29.485 05:55:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:56:29.485 05:55:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:56:29.485 05:55:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:56:29.485 nvmf hotplug test: fio failed as expected 00:56:29.485 05:55:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:56:29.485 05:55:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:56:29.485 05:55:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:56:29.744 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:56:29.744 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:56:29.744 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:56:29.744 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:56:29.744 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:56:29.744 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:56:29.744 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:56:29.744 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:56:29.744 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:56:29.744 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 
00:56:29.744 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:56:29.744 rmmod nvme_tcp 00:56:29.744 rmmod nvme_fabrics 00:56:29.744 rmmod nvme_keyring 00:56:29.744 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:56:29.744 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:56:29.744 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:56:29.744 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 69366 ']' 00:56:29.744 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 69366 00:56:29.744 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 69366 ']' 00:56:29.744 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 69366 00:56:29.744 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:56:29.744 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:56:29.744 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69366 00:56:29.744 killing process with pid 69366 00:56:29.744 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:56:29.744 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:56:29.744 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69366' 00:56:29.744 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 69366 00:56:29.744 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 69366 00:56:30.003 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:56:30.003 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:56:30.003 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:56:30.003 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:56:30.003 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:56:30.003 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:56:30.003 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:56:30.003 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:56:30.003 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:56:30.003 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:56:30.003 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:56:30.003 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:56:30.003 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:56:30.003 05:55:24 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:56:30.003 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:56:30.003 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:56:30.003 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:56:30.003 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:56:30.003 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:56:30.003 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:56:30.262 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:56:30.262 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:56:30.262 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:56:30.262 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:56:30.262 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:56:30.262 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:56:30.262 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:56:30.262 00:56:30.262 real 0m19.292s 00:56:30.262 user 1m12.543s 00:56:30.262 sys 0m9.474s 00:56:30.262 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:56:30.262 ************************************ 00:56:30.262 END TEST nvmf_fio_target 00:56:30.262 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:56:30.262 ************************************ 00:56:30.262 05:55:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:56:30.262 05:55:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:56:30.262 05:55:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:56:30.262 05:55:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:56:30.262 ************************************ 00:56:30.262 START TEST nvmf_bdevio 00:56:30.262 ************************************ 00:56:30.262 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:56:30.262 * Looking for test storage... 
00:56:30.262 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:56:30.262 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:56:30.262 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:56:30.262 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:56:30.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:30.522 --rc genhtml_branch_coverage=1 00:56:30.522 --rc genhtml_function_coverage=1 00:56:30.522 --rc genhtml_legend=1 00:56:30.522 --rc geninfo_all_blocks=1 00:56:30.522 --rc geninfo_unexecuted_blocks=1 00:56:30.522 00:56:30.522 ' 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:56:30.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:30.522 --rc genhtml_branch_coverage=1 00:56:30.522 --rc genhtml_function_coverage=1 00:56:30.522 --rc genhtml_legend=1 00:56:30.522 --rc geninfo_all_blocks=1 00:56:30.522 --rc geninfo_unexecuted_blocks=1 00:56:30.522 00:56:30.522 ' 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:56:30.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:30.522 --rc genhtml_branch_coverage=1 00:56:30.522 --rc genhtml_function_coverage=1 00:56:30.522 --rc genhtml_legend=1 00:56:30.522 --rc geninfo_all_blocks=1 00:56:30.522 --rc geninfo_unexecuted_blocks=1 00:56:30.522 00:56:30.522 ' 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:56:30.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:30.522 --rc genhtml_branch_coverage=1 00:56:30.522 --rc genhtml_function_coverage=1 00:56:30.522 --rc genhtml_legend=1 00:56:30.522 --rc geninfo_all_blocks=1 00:56:30.522 --rc geninfo_unexecuted_blocks=1 00:56:30.522 00:56:30.522 ' 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:56:30.522 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:56:30.523 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
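The nvmftestinit call traced next builds the virtual test network. Condensed, the topology amounts to roughly the following (a sketch based on the ip commands that follow in the trace; the interface, bridge and namespace names and the 10.0.0.x addresses are the ones used there):

    ip netns add nvmf_tgt_ns_spdk
    # two initiator-side and two target-side veth pairs, all bridged together
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br   # likewise for nvmf_init_br2, nvmf_tgt_br, nvmf_tgt_br2

After bringing the links up it also inserts SPDK-tagged iptables ACCEPT rules for port 4420 and pings all four addresses, as seen in the trace below.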
00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:56:30.523 Cannot find device "nvmf_init_br" 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:56:30.523 Cannot find device "nvmf_init_br2" 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:56:30.523 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:56:30.524 Cannot find device "nvmf_tgt_br" 00:56:30.524 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:56:30.524 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:56:30.524 Cannot find device "nvmf_tgt_br2" 00:56:30.524 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:56:30.524 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:56:30.524 Cannot find device "nvmf_init_br" 00:56:30.524 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:56:30.524 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:56:30.524 Cannot find device "nvmf_init_br2" 00:56:30.524 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:56:30.524 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:56:30.524 Cannot find device "nvmf_tgt_br" 00:56:30.524 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:56:30.524 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:56:30.524 Cannot find device "nvmf_tgt_br2" 00:56:30.524 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:56:30.524 05:55:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:56:30.524 Cannot find device "nvmf_br" 00:56:30.524 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:56:30.524 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:56:30.524 Cannot find device "nvmf_init_if" 00:56:30.524 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:56:30.524 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:56:30.524 Cannot find device "nvmf_init_if2" 00:56:30.524 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:56:30.524 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:56:30.524 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:56:30.524 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:56:30.524 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:56:30.524 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:56:30.524 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:56:30.524 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:56:30.524 
05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:56:30.524 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:56:30.524 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:56:30.524 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:56:30.524 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:56:30.524 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:56:30.524 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:56:30.524 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:56:30.524 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:56:30.788 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:56:30.788 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:56:30.788 00:56:30.788 --- 10.0.0.3 ping statistics --- 00:56:30.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:30.788 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:56:30.788 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:56:30.788 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:56:30.788 00:56:30.788 --- 10.0.0.4 ping statistics --- 00:56:30.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:30.788 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:56:30.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:56:30.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:56:30.788 00:56:30.788 --- 10.0.0.1 ping statistics --- 00:56:30.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:30.788 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:56:30.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:56:30.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:56:30.788 00:56:30.788 --- 10.0.0.2 ping statistics --- 00:56:30.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:30.788 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=70275 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 70275 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:56:30.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 70275 ']' 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:56:30.788 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:56:30.788 [2024-12-09 05:55:25.338760] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:56:30.788 [2024-12-09 05:55:25.338848] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:56:31.046 [2024-12-09 05:55:25.493448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:56:31.046 [2024-12-09 05:55:25.534361] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:56:31.046 [2024-12-09 05:55:25.534427] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:56:31.046 [2024-12-09 05:55:25.534443] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:56:31.046 [2024-12-09 05:55:25.534454] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:56:31.046 [2024-12-09 05:55:25.534463] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:56:31.046 [2024-12-09 05:55:25.535404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:56:31.046 [2024-12-09 05:55:25.535547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:56:31.046 [2024-12-09 05:55:25.537682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:56:31.046 [2024-12-09 05:55:25.537710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:56:31.305 [2024-12-09 05:55:25.676241] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:56:31.305 Malloc0 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 
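The target provisioning traced here, and continued just below, corresponds to roughly this rpc.py sequence (a sketch; the transport options, bdev size, subsystem name and listener address are the values visible in the trace):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

With the listener up, the bdevio app attaches over TCP using the generated JSON printed below; an ordinary host-side initiator could reach the same NVMe namespace with something like 'nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1'.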
00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:56:31.305 [2024-12-09 05:55:25.739597] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:56:31.305 { 00:56:31.305 "params": { 00:56:31.305 "name": "Nvme$subsystem", 00:56:31.305 "trtype": "$TEST_TRANSPORT", 00:56:31.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:56:31.305 "adrfam": "ipv4", 00:56:31.305 "trsvcid": "$NVMF_PORT", 00:56:31.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:56:31.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:56:31.305 "hdgst": ${hdgst:-false}, 00:56:31.305 "ddgst": ${ddgst:-false} 00:56:31.305 }, 00:56:31.305 "method": "bdev_nvme_attach_controller" 00:56:31.305 } 00:56:31.305 EOF 00:56:31.305 )") 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:56:31.305 05:55:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:56:31.305 "params": { 00:56:31.305 "name": "Nvme1", 00:56:31.305 "trtype": "tcp", 00:56:31.305 "traddr": "10.0.0.3", 00:56:31.305 "adrfam": "ipv4", 00:56:31.305 "trsvcid": "4420", 00:56:31.305 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:56:31.305 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:56:31.305 "hdgst": false, 00:56:31.305 "ddgst": false 00:56:31.305 }, 00:56:31.305 "method": "bdev_nvme_attach_controller" 00:56:31.305 }' 00:56:31.305 [2024-12-09 05:55:25.801690] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:56:31.305 [2024-12-09 05:55:25.801773] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70314 ] 00:56:31.564 [2024-12-09 05:55:25.948269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:56:31.564 [2024-12-09 05:55:25.979308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:56:31.564 [2024-12-09 05:55:25.979440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:56:31.564 [2024-12-09 05:55:25.979732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:56:31.564 I/O targets: 00:56:31.564 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:56:31.564 00:56:31.564 00:56:31.564 CUnit - A unit testing framework for C - Version 2.1-3 00:56:31.564 http://cunit.sourceforge.net/ 00:56:31.564 00:56:31.564 00:56:31.564 Suite: bdevio tests on: Nvme1n1 00:56:31.822 Test: blockdev write read block ...passed 00:56:31.822 Test: blockdev write zeroes read block ...passed 00:56:31.822 Test: blockdev write zeroes read no split ...passed 00:56:31.822 Test: blockdev write zeroes read split ...passed 00:56:31.822 Test: blockdev write zeroes read split partial ...passed 00:56:31.822 Test: blockdev reset ...[2024-12-09 05:55:26.226709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:56:31.822 [2024-12-09 05:55:26.226958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1522f50 (9): Bad file descriptor 00:56:31.822 [2024-12-09 05:55:26.241882] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:56:31.822 passed 00:56:31.822 Test: blockdev write read 8 blocks ...passed 00:56:31.822 Test: blockdev write read size > 128k ...passed 00:56:31.822 Test: blockdev write read invalid size ...passed 00:56:31.822 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:56:31.822 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:56:31.822 Test: blockdev write read max offset ...passed 00:56:31.823 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:56:31.823 Test: blockdev writev readv 8 blocks ...passed 00:56:31.823 Test: blockdev writev readv 30 x 1block ...passed 00:56:32.081 Test: blockdev writev readv block ...passed 00:56:32.081 Test: blockdev writev readv size > 128k ...passed 00:56:32.081 Test: blockdev writev readv size > 128k in two iovs ...passed 00:56:32.081 Test: blockdev comparev and writev ...[2024-12-09 05:55:26.416947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:56:32.081 [2024-12-09 05:55:26.417015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:56:32.081 [2024-12-09 05:55:26.417034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:56:32.081 [2024-12-09 05:55:26.417044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:56:32.081 [2024-12-09 05:55:26.417490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:56:32.081 [2024-12-09 05:55:26.417521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:56:32.081 [2024-12-09 05:55:26.417538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:56:32.081 [2024-12-09 05:55:26.417548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:56:32.081 [2024-12-09 05:55:26.418055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:56:32.081 [2024-12-09 05:55:26.418086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:56:32.081 [2024-12-09 05:55:26.418104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:56:32.081 [2024-12-09 05:55:26.418114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:56:32.081 [2024-12-09 05:55:26.418570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:56:32.081 [2024-12-09 05:55:26.418599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:56:32.081 [2024-12-09 05:55:26.418616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:56:32.081 [2024-12-09 05:55:26.418626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:56:32.081 passed 00:56:32.082 Test: blockdev nvme passthru rw ...passed 00:56:32.082 Test: blockdev nvme passthru vendor specific ...[2024-12-09 05:55:26.503023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:56:32.082 [2024-12-09 05:55:26.503050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:56:32.082 [2024-12-09 05:55:26.503419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:56:32.082 [2024-12-09 05:55:26.503448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:56:32.082 [2024-12-09 05:55:26.503823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:56:32.082 [2024-12-09 05:55:26.503852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:56:32.082 [2024-12-09 05:55:26.504098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:56:32.082 [2024-12-09 05:55:26.504214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:56:32.082 passed 00:56:32.082 Test: blockdev nvme admin passthru ...passed 00:56:32.082 Test: blockdev copy ...passed 00:56:32.082 00:56:32.082 Run Summary: Type Total Ran Passed Failed Inactive 00:56:32.082 suites 1 1 n/a 0 0 00:56:32.082 tests 23 23 23 0 0 00:56:32.082 asserts 152 152 152 0 n/a 00:56:32.082 00:56:32.082 Elapsed time = 0.889 seconds 00:56:32.340 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:56:32.340 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:32.340 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:56:32.341 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:32.341 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:56:32.341 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:56:32.341 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:56:32.341 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:56:32.341 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:56:32.341 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:56:32.341 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:56:32.341 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:56:32.341 rmmod nvme_tcp 00:56:32.341 rmmod nvme_fabrics 00:56:32.341 rmmod nvme_keyring 00:56:32.341 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:56:32.341 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:56:32.341 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
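The nvmftestfini sequence traced here and just below undoes the earlier setup; condensed, it is roughly the following (a sketch; pid 70275 and the SPDK_NVMF comment tag are the values from this run):

    modprobe -r nvme-tcp nvme-fabrics        # the trace shows this also drops nvme_keyring
    kill 70275 && wait 70275                 # the nvmf_tgt started at the beginning of the test
    # strip only the SPDK-tagged firewall rules, keep everything else
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # tear down the bridge, the veth pairs and the target network namespace
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk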
00:56:32.341 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 70275 ']' 00:56:32.341 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 70275 00:56:32.341 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 70275 ']' 00:56:32.341 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 70275 00:56:32.341 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:56:32.341 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:56:32.341 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70275 00:56:32.341 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:56:32.341 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:56:32.341 killing process with pid 70275 00:56:32.341 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70275' 00:56:32.341 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 70275 00:56:32.341 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 70275 00:56:32.599 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:56:32.599 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:56:32.599 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:56:32.599 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:56:32.599 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:56:32.599 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:56:32.599 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:56:32.599 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:56:32.599 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:56:32.599 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:56:32.599 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:56:32.599 05:55:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:56:32.599 05:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:56:32.599 05:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:56:32.599 05:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:56:32.599 05:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:56:32.599 05:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:56:32.599 05:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:56:32.599 05:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:56:32.599 05:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:56:32.599 05:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:56:32.599 05:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:56:32.599 05:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:56:32.599 05:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:56:32.599 05:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:56:32.599 05:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:56:32.857 05:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:56:32.857 00:56:32.857 real 0m2.486s 00:56:32.857 user 0m7.549s 00:56:32.857 sys 0m0.729s 00:56:32.857 05:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:56:32.857 ************************************ 00:56:32.857 END TEST nvmf_bdevio 00:56:32.857 05:55:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:56:32.857 ************************************ 00:56:32.858 05:55:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:56:32.858 00:56:32.858 real 3m24.885s 00:56:32.858 user 10m42.726s 00:56:32.858 sys 1m1.257s 00:56:32.858 05:55:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:56:32.858 05:55:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:56:32.858 ************************************ 00:56:32.858 END TEST nvmf_target_core 00:56:32.858 ************************************ 00:56:32.858 05:55:27 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:56:32.858 05:55:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:56:32.858 05:55:27 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:56:32.858 05:55:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:56:32.858 ************************************ 00:56:32.858 START TEST nvmf_target_extra 00:56:32.858 ************************************ 00:56:32.858 05:55:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:56:32.858 * Looking for test storage... 
00:56:32.858 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:56:32.858 05:55:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:56:32.858 05:55:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:56:32.858 05:55:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:56:32.858 05:55:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:56:32.858 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:56:32.858 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:56:32.858 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:56:32.858 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:56:32.858 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:56:32.858 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:56:32.858 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:56:32.858 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:56:32.858 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:56:32.858 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:56:32.858 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:56:32.858 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:56:32.858 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:56:32.858 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:56:32.858 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:56:32.858 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:56:33.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:33.118 --rc genhtml_branch_coverage=1 00:56:33.118 --rc genhtml_function_coverage=1 00:56:33.118 --rc genhtml_legend=1 00:56:33.118 --rc geninfo_all_blocks=1 00:56:33.118 --rc geninfo_unexecuted_blocks=1 00:56:33.118 00:56:33.118 ' 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:56:33.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:33.118 --rc genhtml_branch_coverage=1 00:56:33.118 --rc genhtml_function_coverage=1 00:56:33.118 --rc genhtml_legend=1 00:56:33.118 --rc geninfo_all_blocks=1 00:56:33.118 --rc geninfo_unexecuted_blocks=1 00:56:33.118 00:56:33.118 ' 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:56:33.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:33.118 --rc genhtml_branch_coverage=1 00:56:33.118 --rc genhtml_function_coverage=1 00:56:33.118 --rc genhtml_legend=1 00:56:33.118 --rc geninfo_all_blocks=1 00:56:33.118 --rc geninfo_unexecuted_blocks=1 00:56:33.118 00:56:33.118 ' 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:56:33.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:33.118 --rc genhtml_branch_coverage=1 00:56:33.118 --rc genhtml_function_coverage=1 00:56:33.118 --rc genhtml_legend=1 00:56:33.118 --rc geninfo_all_blocks=1 00:56:33.118 --rc geninfo_unexecuted_blocks=1 00:56:33.118 00:56:33.118 ' 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:56:33.118 05:55:27 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:56:33.118 05:55:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:56:33.119 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:56:33.119 ************************************ 00:56:33.119 START TEST nvmf_example 00:56:33.119 ************************************ 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:56:33.119 * Looking for test storage... 
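The lt 1.15 2 / cmp_versions trace above is scripts/common.sh deciding whether the installed lcov predates version 2 before enabling the extra branch/function coverage options. A standalone sketch of that comparison, assuming plain numeric components separated by dots or dashes; the function name version_lt is illustrative, not the suite's:

# Return success when version $1 sorts strictly before version $2.
version_lt() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local i a b
    for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
        a=${ver1[i]:-0} b=${ver2[i]:-0}
        ((a > b)) && return 1
        ((a < b)) && return 0
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "old lcov: add --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"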
00:56:33.119 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:56:33.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:33.119 --rc genhtml_branch_coverage=1 00:56:33.119 --rc genhtml_function_coverage=1 00:56:33.119 --rc genhtml_legend=1 00:56:33.119 --rc geninfo_all_blocks=1 00:56:33.119 --rc geninfo_unexecuted_blocks=1 00:56:33.119 00:56:33.119 ' 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:56:33.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:33.119 --rc genhtml_branch_coverage=1 00:56:33.119 --rc genhtml_function_coverage=1 00:56:33.119 --rc genhtml_legend=1 00:56:33.119 --rc geninfo_all_blocks=1 00:56:33.119 --rc geninfo_unexecuted_blocks=1 00:56:33.119 00:56:33.119 ' 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:56:33.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:33.119 --rc genhtml_branch_coverage=1 00:56:33.119 --rc genhtml_function_coverage=1 00:56:33.119 --rc genhtml_legend=1 00:56:33.119 --rc geninfo_all_blocks=1 00:56:33.119 --rc geninfo_unexecuted_blocks=1 00:56:33.119 00:56:33.119 ' 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:56:33.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:33.119 --rc genhtml_branch_coverage=1 00:56:33.119 --rc genhtml_function_coverage=1 00:56:33.119 --rc genhtml_legend=1 00:56:33.119 --rc geninfo_all_blocks=1 00:56:33.119 --rc geninfo_unexecuted_blocks=1 00:56:33.119 00:56:33.119 ' 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:56:33.119 05:55:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:56:33.119 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:56:33.120 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:56:33.120 05:55:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:56:33.120 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@460 -- # nvmf_veth_init 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:56:33.379 Cannot find device "nvmf_init_br" 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # true 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:56:33.379 Cannot find device "nvmf_init_br2" 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # true 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:56:33.379 Cannot find device "nvmf_tgt_br" 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # true 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:56:33.379 Cannot find device "nvmf_tgt_br2" 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # true 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:56:33.379 Cannot find device "nvmf_init_br" 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # true 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:56:33.379 Cannot find device "nvmf_init_br2" 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # true 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:56:33.379 Cannot find device "nvmf_tgt_br" 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # true 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:56:33.379 Cannot find device "nvmf_tgt_br2" 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # true 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:56:33.379 Cannot find device "nvmf_br" 00:56:33.379 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # true 00:56:33.380 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:56:33.380 Cannot find 
device "nvmf_init_if" 00:56:33.380 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # true 00:56:33.380 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:56:33.380 Cannot find device "nvmf_init_if2" 00:56:33.380 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # true 00:56:33.380 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:56:33.380 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:56:33.380 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # true 00:56:33.380 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:56:33.380 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:56:33.380 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # true 00:56:33.380 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:56:33.380 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:56:33.380 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:56:33.380 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:56:33.380 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:56:33.380 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:56:33.380 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:56:33.380 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:56:33.380 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:56:33.380 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:56:33.380 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:56:33.380 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:56:33.380 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:56:33.380 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:56:33.380 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:56:33.380 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:56:33.380 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:56:33.380 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:56:33.639 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@203 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:56:33.639 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:56:33.639 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:56:33.639 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:56:33.639 05:55:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:56:33.639 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:56:33.639 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:56:33.639 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:56:33.639 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:56:33.639 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:56:33.639 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:56:33.639 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:56:33.639 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:56:33.639 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:56:33.639 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:56:33.639 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:56:33.639 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:56:33.639 00:56:33.639 --- 10.0.0.3 ping statistics --- 00:56:33.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:33.639 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:56:33.639 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:56:33.639 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:56:33.639 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:56:33.639 00:56:33.639 --- 10.0.0.4 ping statistics --- 00:56:33.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:33.639 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:56:33.640 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:56:33.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:56:33.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:56:33.640 00:56:33.640 --- 10.0.0.1 ping statistics --- 00:56:33.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:33.640 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:56:33.640 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:56:33.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:56:33.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:56:33.640 00:56:33.640 --- 10.0.0.2 ping statistics --- 00:56:33.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:33.640 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:56:33.640 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:56:33.640 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@461 -- # return 0 00:56:33.640 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:56:33.640 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:56:33.640 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:56:33.640 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:56:33.640 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:56:33.640 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:56:33.640 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:56:33.640 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:56:33.640 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:56:33.640 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:56:33.640 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:56:33.640 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:56:33.640 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:56:33.640 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=70613 00:56:33.640 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:56:33.640 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 70613 00:56:33.640 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:56:33.640 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 70613 ']' 00:56:33.640 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:56:33.640 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:56:33.640 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:56:33.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:56:33.640 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:56:33.640 05:55:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:56:35.019 05:55:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:56:35.019 05:55:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:56:35.019 05:55:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:56:35.019 05:55:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:56:35.019 05:55:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:56:35.019 05:55:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:56:35.019 05:55:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:35.019 05:55:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:56:35.019 05:55:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:35.019 05:55:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:56:35.019 05:55:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:35.019 05:55:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:56:35.019 05:55:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:35.019 05:55:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:56:35.019 05:55:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:56:35.019 05:55:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:35.019 05:55:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:56:35.019 05:55:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:35.019 05:55:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:56:35.019 05:55:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:56:35.019 05:55:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:35.019 05:55:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:56:35.019 05:55:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:35.019 05:55:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:56:35.019 05:55:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:35.019 05:55:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:56:35.019 05:55:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:35.019 05:55:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:56:35.019 05:55:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:56:45.009 Initializing NVMe Controllers 00:56:45.009 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:56:45.009 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:56:45.009 Initialization complete. Launching workers. 00:56:45.009 ======================================================== 00:56:45.009 Latency(us) 00:56:45.009 Device Information : IOPS MiB/s Average min max 00:56:45.009 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16463.90 64.31 3888.52 557.14 20269.99 00:56:45.009 ======================================================== 00:56:45.009 Total : 16463.90 64.31 3888.52 557.14 20269.99 00:56:45.009 00:56:45.269 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:56:45.269 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:56:45.269 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:56:45.269 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:56:45.269 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:56:45.269 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:56:45.269 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:56:45.269 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:56:45.269 rmmod nvme_tcp 00:56:45.269 rmmod nvme_fabrics 00:56:45.269 rmmod nvme_keyring 00:56:45.269 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:56:45.269 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:56:45.269 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:56:45.269 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 70613 ']' 00:56:45.269 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 70613 00:56:45.269 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 70613 ']' 00:56:45.269 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 70613 00:56:45.269 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:56:45.269 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:56:45.269 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70613 00:56:45.269 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:56:45.269 05:55:39 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:56:45.269 killing process with pid 70613 00:56:45.269 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70613' 00:56:45.269 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 70613 00:56:45.269 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 70613 00:56:45.529 nvmf threads initialize successfully 00:56:45.529 bdev subsystem init successfully 00:56:45.529 created a nvmf target service 00:56:45.529 create targets's poll groups done 00:56:45.529 all subsystems of target started 00:56:45.529 nvmf target is running 00:56:45.529 all subsystems of target stopped 00:56:45.529 destroy targets's poll groups done 00:56:45.529 destroyed the nvmf target service 00:56:45.529 bdev subsystem finish successfully 00:56:45.529 nvmf threads destroy successfully 00:56:45.529 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:56:45.529 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:56:45.529 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:56:45.529 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:56:45.529 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:56:45.529 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:56:45.529 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:56:45.529 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:56:45.529 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:56:45.529 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:56:45.529 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:56:45.529 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:56:45.529 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:56:45.529 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:56:45.529 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:56:45.529 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:56:45.529 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:56:45.529 05:55:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:56:45.529 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:56:45.529 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:56:45.529 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:56:45.529 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:56:45.529 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@246 -- # remove_spdk_ns 00:56:45.529 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:56:45.529 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:56:45.529 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:56:45.788 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@300 -- # return 0 00:56:45.788 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:56:45.788 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:56:45.788 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:56:45.788 ************************************ 00:56:45.788 END TEST nvmf_example 00:56:45.788 ************************************ 00:56:45.788 00:56:45.788 real 0m12.690s 00:56:45.788 user 0m44.769s 00:56:45.788 sys 0m2.072s 00:56:45.788 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:56:45.788 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:56:45.788 05:55:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:56:45.788 05:55:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:56:45.788 05:55:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:56:45.788 05:55:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:56:45.788 ************************************ 00:56:45.788 START TEST nvmf_filesystem 00:56:45.788 ************************************ 00:56:45.788 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:56:45.788 * Looking for test storage... 
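The nvmf_example run traced above boils down to: launch the example target inside the nvmf_tgt_ns_spdk namespace, configure a TCP transport, a malloc-backed namespace, and a listener over JSON-RPC, then measure it with spdk_nvme_perf. A condensed sketch of that sequence with the same arguments the log shows; scripts/rpc.py stands in for the suite's rpc_cmd wrapper, and the backgrounding/wait handling is simplified:

SPDK=/home/vagrant/spdk_repo/spdk

# Start the example NVMe-oF target pinned to cores 0-3 inside the test namespace.
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/examples/nvmf" -i 0 -g 10000 -m 0xF &
sleep 2   # the suite instead polls the RPC socket (waitforlisten on /var/tmp/spdk.sock)

"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
"$SPDK/scripts/rpc.py" bdev_malloc_create 64 512        # 64 MiB bdev, 512-byte blocks -> Malloc0
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# 10 seconds of 4 KiB mixed random I/O at queue depth 64 against the new listener.
"$SPDK/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'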
00:56:45.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:56:45.788 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:56:45.788 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:56:45.788 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:56:46.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:46.051 --rc genhtml_branch_coverage=1 00:56:46.051 --rc genhtml_function_coverage=1 00:56:46.051 --rc genhtml_legend=1 00:56:46.051 --rc geninfo_all_blocks=1 00:56:46.051 --rc geninfo_unexecuted_blocks=1 00:56:46.051 00:56:46.051 ' 00:56:46.051 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:56:46.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:46.051 --rc genhtml_branch_coverage=1 00:56:46.051 --rc genhtml_function_coverage=1 00:56:46.051 --rc genhtml_legend=1 00:56:46.051 --rc geninfo_all_blocks=1 00:56:46.051 --rc geninfo_unexecuted_blocks=1 00:56:46.051 00:56:46.052 ' 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:56:46.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:46.052 --rc genhtml_branch_coverage=1 00:56:46.052 --rc genhtml_function_coverage=1 00:56:46.052 --rc genhtml_legend=1 00:56:46.052 --rc geninfo_all_blocks=1 00:56:46.052 --rc geninfo_unexecuted_blocks=1 00:56:46.052 00:56:46.052 ' 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:56:46.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:46.052 --rc genhtml_branch_coverage=1 00:56:46.052 --rc genhtml_function_coverage=1 00:56:46.052 --rc genhtml_legend=1 00:56:46.052 --rc geninfo_all_blocks=1 00:56:46.052 --rc geninfo_unexecuted_blocks=1 00:56:46.052 00:56:46.052 ' 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:56:46.052 05:55:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 
00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:56:46.052 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=y 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=y 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # 
CONFIG_TESTS=y 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:56:46.053 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:56:46.053 #define SPDK_CONFIG_H 00:56:46.053 #define SPDK_CONFIG_AIO_FSDEV 1 00:56:46.053 #define SPDK_CONFIG_APPS 1 00:56:46.053 #define SPDK_CONFIG_ARCH 
native 00:56:46.053 #undef SPDK_CONFIG_ASAN 00:56:46.053 #define SPDK_CONFIG_AVAHI 1 00:56:46.053 #undef SPDK_CONFIG_CET 00:56:46.053 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:56:46.053 #define SPDK_CONFIG_COVERAGE 1 00:56:46.053 #define SPDK_CONFIG_CROSS_PREFIX 00:56:46.053 #undef SPDK_CONFIG_CRYPTO 00:56:46.053 #undef SPDK_CONFIG_CRYPTO_MLX5 00:56:46.053 #undef SPDK_CONFIG_CUSTOMOCF 00:56:46.053 #undef SPDK_CONFIG_DAOS 00:56:46.053 #define SPDK_CONFIG_DAOS_DIR 00:56:46.053 #define SPDK_CONFIG_DEBUG 1 00:56:46.053 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:56:46.053 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:56:46.053 #define SPDK_CONFIG_DPDK_INC_DIR 00:56:46.053 #define SPDK_CONFIG_DPDK_LIB_DIR 00:56:46.053 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:56:46.053 #undef SPDK_CONFIG_DPDK_UADK 00:56:46.053 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:56:46.053 #define SPDK_CONFIG_EXAMPLES 1 00:56:46.053 #undef SPDK_CONFIG_FC 00:56:46.053 #define SPDK_CONFIG_FC_PATH 00:56:46.053 #define SPDK_CONFIG_FIO_PLUGIN 1 00:56:46.053 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:56:46.053 #define SPDK_CONFIG_FSDEV 1 00:56:46.053 #undef SPDK_CONFIG_FUSE 00:56:46.053 #undef SPDK_CONFIG_FUZZER 00:56:46.053 #define SPDK_CONFIG_FUZZER_LIB 00:56:46.053 #define SPDK_CONFIG_GOLANG 1 00:56:46.053 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:56:46.053 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:56:46.053 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:56:46.053 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:56:46.053 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:56:46.053 #undef SPDK_CONFIG_HAVE_LIBBSD 00:56:46.053 #undef SPDK_CONFIG_HAVE_LZ4 00:56:46.053 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:56:46.053 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:56:46.053 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:56:46.053 #define SPDK_CONFIG_IDXD 1 00:56:46.053 #define SPDK_CONFIG_IDXD_KERNEL 1 00:56:46.053 #undef SPDK_CONFIG_IPSEC_MB 00:56:46.053 #define SPDK_CONFIG_IPSEC_MB_DIR 00:56:46.053 #define SPDK_CONFIG_ISAL 1 00:56:46.053 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:56:46.053 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:56:46.053 #define SPDK_CONFIG_LIBDIR 00:56:46.053 #undef SPDK_CONFIG_LTO 00:56:46.054 #define SPDK_CONFIG_MAX_LCORES 128 00:56:46.054 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:56:46.054 #define SPDK_CONFIG_NVME_CUSE 1 00:56:46.054 #undef SPDK_CONFIG_OCF 00:56:46.054 #define SPDK_CONFIG_OCF_PATH 00:56:46.054 #define SPDK_CONFIG_OPENSSL_PATH 00:56:46.054 #undef SPDK_CONFIG_PGO_CAPTURE 00:56:46.054 #define SPDK_CONFIG_PGO_DIR 00:56:46.054 #undef SPDK_CONFIG_PGO_USE 00:56:46.054 #define SPDK_CONFIG_PREFIX /usr/local 00:56:46.054 #undef SPDK_CONFIG_RAID5F 00:56:46.054 #undef SPDK_CONFIG_RBD 00:56:46.054 #define SPDK_CONFIG_RDMA 1 00:56:46.054 #define SPDK_CONFIG_RDMA_PROV verbs 00:56:46.054 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:56:46.054 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:56:46.054 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:56:46.054 #define SPDK_CONFIG_SHARED 1 00:56:46.054 #undef SPDK_CONFIG_SMA 00:56:46.054 #define SPDK_CONFIG_TESTS 1 00:56:46.054 #undef SPDK_CONFIG_TSAN 00:56:46.054 #define SPDK_CONFIG_UBLK 1 00:56:46.054 #define SPDK_CONFIG_UBSAN 1 00:56:46.054 #undef SPDK_CONFIG_UNIT_TESTS 00:56:46.054 #undef SPDK_CONFIG_URING 00:56:46.054 #define SPDK_CONFIG_URING_PATH 00:56:46.054 #undef SPDK_CONFIG_URING_ZNS 00:56:46.054 #define SPDK_CONFIG_USDT 1 00:56:46.054 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:56:46.054 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:56:46.054 
#undef SPDK_CONFIG_VFIO_USER 00:56:46.054 #define SPDK_CONFIG_VFIO_USER_DIR 00:56:46.054 #define SPDK_CONFIG_VHOST 1 00:56:46.054 #define SPDK_CONFIG_VIRTIO 1 00:56:46.054 #undef SPDK_CONFIG_VTUNE 00:56:46.054 #define SPDK_CONFIG_VTUNE_DIR 00:56:46.054 #define SPDK_CONFIG_WERROR 1 00:56:46.054 #define SPDK_CONFIG_WPDK_DIR 00:56:46.054 #undef SPDK_CONFIG_XNVME 00:56:46.054 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:56:46.054 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export 
SPDK_TEST_NVME_CUSE 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:56:46.055 
05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:56:46.055 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:56:46.056 05:55:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 1 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # 
SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:56:46.056 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:56:46.057 05:55:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 70888 ]] 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 70888 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.OirdNU 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.OirdNU/tests/target /tmp/spdk.OirdNU 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:56:46.057 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 
-- # fss["$mount"]=btrfs 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13980921856 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5588078592 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6256394240 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=2486431744 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=20140032 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13980921856 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5588078592 00:56:46.058 
05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6266286080 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=139264 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 
-- # fss["$mount"]=fuse.sshfs 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=98731470848 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=971309056 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:56:46.058 * Looking for test storage... 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/home 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=13980921856 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:56:46.058 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:56:46.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:56:46.059 05:55:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:56:46.059 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:56:46.319 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:56:46.319 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:56:46.319 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:56:46.319 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:56:46.319 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:56:46.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:46.320 --rc genhtml_branch_coverage=1 00:56:46.320 --rc genhtml_function_coverage=1 00:56:46.320 --rc genhtml_legend=1 00:56:46.320 --rc geninfo_all_blocks=1 00:56:46.320 --rc geninfo_unexecuted_blocks=1 00:56:46.320 00:56:46.320 ' 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:56:46.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:46.320 --rc genhtml_branch_coverage=1 00:56:46.320 --rc genhtml_function_coverage=1 00:56:46.320 --rc genhtml_legend=1 00:56:46.320 --rc geninfo_all_blocks=1 00:56:46.320 --rc geninfo_unexecuted_blocks=1 00:56:46.320 00:56:46.320 ' 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:56:46.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:46.320 --rc genhtml_branch_coverage=1 00:56:46.320 --rc genhtml_function_coverage=1 00:56:46.320 --rc genhtml_legend=1 00:56:46.320 --rc geninfo_all_blocks=1 00:56:46.320 --rc geninfo_unexecuted_blocks=1 00:56:46.320 00:56:46.320 ' 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:56:46.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:56:46.320 --rc genhtml_branch_coverage=1 00:56:46.320 --rc genhtml_function_coverage=1 00:56:46.320 --rc genhtml_legend=1 00:56:46.320 --rc geninfo_all_blocks=1 00:56:46.320 --rc geninfo_unexecuted_blocks=1 00:56:46.320 00:56:46.320 ' 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- 
# uname -s 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:56:46.320 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:56:46.321 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:56:46.321 05:55:40 
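The stretch above is test/nvmf/common.sh being sourced: it pins the NVMe/TCP ports, derives a host NQN/host ID pair from nvme gen-hostnqn, and appends the shared-memory ID and log-flag arguments to the nvmf_tgt command line. Roughly the same preparation as a hedged stand-alone sketch (the nvmf_tgt path is illustrative):

# Sketch of the host-identity and app-argument setup done by nvmf/common.sh.
NVMF_PORT=4420
NVMF_SECOND_PORT=4421
NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}         # the uuid part doubles as --hostid
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
NVMF_APP_SHM_ID=0
NVMF_APP=(/path/to/spdk/build/bin/nvmf_tgt -i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # path illustrative
echo "initiator identity: ${NVME_HOST[*]}; target will listen on port $NVMF_PORT"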
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 
-- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:56:46.321 Cannot find device "nvmf_init_br" 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:56:46.321 Cannot find device "nvmf_init_br2" 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:56:46.321 Cannot find device "nvmf_tgt_br" 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # true 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:56:46.321 Cannot find device "nvmf_tgt_br2" 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # true 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:56:46.321 Cannot find device "nvmf_init_br" 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # true 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:56:46.321 Cannot find device "nvmf_init_br2" 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # true 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:56:46.321 Cannot find device "nvmf_tgt_br" 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # true 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:56:46.321 Cannot find device "nvmf_tgt_br2" 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # true 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:56:46.321 Cannot find device "nvmf_br" 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # true 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:56:46.321 Cannot find device "nvmf_init_if" 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # true 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:56:46.321 Cannot find device "nvmf_init_if2" 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # true 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:56:46.321 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # true 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:56:46.321 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # true 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:56:46.321 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:56:46.322 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:56:46.322 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:56:46.322 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:56:46.322 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:56:46.322 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:56:46.322 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:56:46.322 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:56:46.582 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:56:46.582 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:56:46.582 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:56:46.582 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:56:46.582 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:56:46.582 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:56:46.582 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:56:46.582 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:56:46.582 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:56:46.582 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:56:46.582 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:56:46.582 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:56:46.582 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:56:46.582 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:56:46.582 05:55:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:56:46.582 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:56:46.582 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:56:46.582 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:56:46.582 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:56:46.582 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:56:46.582 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:56:46.582 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:56:46.582 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:56:46.582 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:56:46.582 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:56:46.582 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:56:46.582 00:56:46.582 --- 10.0.0.3 ping statistics --- 00:56:46.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:46.582 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:56:46.582 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:56:46.582 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:56:46.582 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:56:46.582 00:56:46.582 --- 10.0.0.4 ping statistics --- 00:56:46.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:46.582 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:56:46.582 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:56:46.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:56:46.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:56:46.582 00:56:46.582 --- 10.0.0.1 ping statistics --- 00:56:46.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:46.582 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:56:46.582 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:56:46.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:56:46.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:56:46.582 00:56:46.582 --- 10.0.0.2 ping statistics --- 00:56:46.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:46.583 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:56:46.583 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:56:46.583 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@461 -- # return 0 00:56:46.583 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:56:46.583 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:56:46.583 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:56:46.583 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:56:46.583 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:56:46.583 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:56:46.583 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:56:46.583 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:56:46.583 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:56:46.583 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:56:46.583 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:56:46.583 ************************************ 00:56:46.583 START TEST nvmf_filesystem_no_in_capsule 00:56:46.583 ************************************ 00:56:46.583 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:56:46.583 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:56:46.583 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:56:46.583 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:56:46.583 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:56:46.583 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:56:46.583 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=71078 00:56:46.583 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:56:46.583 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 71078 00:56:46.583 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 71078 ']' 00:56:46.583 05:55:41 
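The virtual topology whose construction is traced above (and verified by the four pings) pairs each veth end either with the initiator side in the root namespace or with the target side in the nvmf_tgt_ns_spdk namespace, then bridges everything over nvmf_br. Condensed to one veth pair, the recipe looks roughly like this (the real common.sh creates two initiator and two target interfaces and adds iptables ACCEPT rules as well):

# Hedged single-pair sketch of the nvmf_veth_init topology traced above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end + bridge end
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target end + bridge end
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.3                                             # initiator -> target sanity check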
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:56:46.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:56:46.583 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:56:46.583 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:56:46.583 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:56:46.583 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:56:46.583 [2024-12-09 05:55:41.152689] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:56:46.583 [2024-12-09 05:55:41.152745] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:56:46.842 [2024-12-09 05:55:41.301999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:56:46.842 [2024-12-09 05:55:41.342789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:56:46.842 [2024-12-09 05:55:41.343105] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:56:46.842 [2024-12-09 05:55:41.343314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:56:46.842 [2024-12-09 05:55:41.343599] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:56:46.842 [2024-12-09 05:55:41.343796] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:56:46.842 [2024-12-09 05:55:41.344777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:56:46.842 [2024-12-09 05:55:41.344857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:56:46.842 [2024-12-09 05:55:41.344997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:56:46.842 [2024-12-09 05:55:41.345004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:56:47.101 [2024-12-09 05:55:41.493063] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:56:47.101 Malloc1 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:47.101 05:55:41 
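rpc_cmd in the trace is a wrapper that issues JSON-RPC calls to the target over /var/tmp/spdk.sock; the subsystem being assembled in this stretch (TCP transport, 512 MiB malloc bdev, subsystem, namespace and the 10.0.0.3:4420 listener) corresponds roughly to the following scripts/rpc.py calls. This is an outline, not the verbatim helper:

# Hedged outline of the target assembly performed via rpc_cmd above and just below.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192 -c 0                      # -c 0: no in-capsule data for this variant
$RPC bdev_malloc_create 512 512 -b Malloc1                             # 512 MiB ramdisk, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420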
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:56:47.101 [2024-12-09 05:55:41.614549] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:47.101 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:56:47.101 { 00:56:47.101 "aliases": [ 00:56:47.101 "98536d1b-df67-47c1-a170-f398f4c5d441" 00:56:47.101 ], 00:56:47.101 "assigned_rate_limits": { 00:56:47.101 "r_mbytes_per_sec": 0, 00:56:47.101 "rw_ios_per_sec": 0, 00:56:47.101 "rw_mbytes_per_sec": 0, 00:56:47.101 "w_mbytes_per_sec": 0 00:56:47.101 }, 00:56:47.101 "block_size": 512, 00:56:47.101 "claim_type": "exclusive_write", 00:56:47.101 "claimed": true, 00:56:47.101 "driver_specific": {}, 00:56:47.101 "memory_domains": [ 00:56:47.101 { 00:56:47.101 "dma_device_id": "system", 00:56:47.101 "dma_device_type": 1 00:56:47.101 }, 00:56:47.101 { 00:56:47.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:56:47.101 
"dma_device_type": 2 00:56:47.101 } 00:56:47.101 ], 00:56:47.101 "name": "Malloc1", 00:56:47.101 "num_blocks": 1048576, 00:56:47.101 "product_name": "Malloc disk", 00:56:47.101 "supported_io_types": { 00:56:47.101 "abort": true, 00:56:47.101 "compare": false, 00:56:47.101 "compare_and_write": false, 00:56:47.101 "copy": true, 00:56:47.101 "flush": true, 00:56:47.101 "get_zone_info": false, 00:56:47.101 "nvme_admin": false, 00:56:47.101 "nvme_io": false, 00:56:47.101 "nvme_io_md": false, 00:56:47.101 "nvme_iov_md": false, 00:56:47.101 "read": true, 00:56:47.101 "reset": true, 00:56:47.101 "seek_data": false, 00:56:47.102 "seek_hole": false, 00:56:47.102 "unmap": true, 00:56:47.102 "write": true, 00:56:47.102 "write_zeroes": true, 00:56:47.102 "zcopy": true, 00:56:47.102 "zone_append": false, 00:56:47.102 "zone_management": false 00:56:47.102 }, 00:56:47.102 "uuid": "98536d1b-df67-47c1-a170-f398f4c5d441", 00:56:47.102 "zoned": false 00:56:47.102 } 00:56:47.102 ]' 00:56:47.102 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:56:47.360 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:56:47.360 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:56:47.360 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:56:47.360 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:56:47.360 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:56:47.360 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:56:47.360 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:56:47.360 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:56:47.360 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:56:47.360 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:56:47.360 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:56:47.360 05:55:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:56:49.889 05:55:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:56:49.889 05:55:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:56:49.889 05:55:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # 
lsblk -l -o NAME,SERIAL 00:56:49.889 05:55:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:56:49.889 05:55:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:56:49.889 05:55:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:56:49.889 05:55:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:56:49.889 05:55:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:56:49.889 05:55:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:56:49.889 05:55:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:56:49.889 05:55:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:56:49.889 05:55:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:56:49.889 05:55:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:56:49.889 05:55:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:56:49.889 05:55:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:56:49.889 05:55:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:56:49.889 05:55:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:56:49.889 05:55:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:56:49.889 05:55:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:56:50.823 05:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:56:50.823 05:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:56:50.823 05:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:56:50.823 05:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:56:50.823 05:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:56:50.823 ************************************ 00:56:50.823 START TEST filesystem_ext4 00:56:50.823 ************************************ 00:56:50.823 05:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
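On the initiator side, the trace above connects to the subsystem, waits until a block device carrying the SPDKISFASTANDAWESOME serial appears, checks that its size matches the 512 MiB malloc bdev, and lays down a single GPT partition for the filesystem tests that follow. A condensed sketch of that flow (the device name will vary; $NVME_HOSTNQN and $NVME_HOSTID as generated earlier):

# Hedged sketch of the initiator-side connect-and-partition flow traced above.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
mkdir -p /mnt/device
parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe
sleep 1                      # give the kernel a moment to create /dev/${nvme_name}p1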
00:56:50.823 05:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:56:50.823 05:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:56:50.823 05:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:56:50.823 05:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:56:50.823 05:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:56:50.823 05:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:56:50.823 05:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:56:50.823 05:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:56:50.823 05:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:56:50.824 05:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:56:50.824 mke2fs 1.47.0 (5-Feb-2023) 00:56:50.824 Discarding device blocks: 0/522240 done 00:56:50.824 Creating filesystem with 522240 1k blocks and 130560 inodes 00:56:50.824 Filesystem UUID: decbfd1c-6139-4735-be0d-18c90d32680d 00:56:50.824 Superblock backups stored on blocks: 00:56:50.824 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:56:50.824 00:56:50.824 Allocating group tables: 0/64 done 00:56:50.824 Writing inode tables: 0/64 done 00:56:50.824 Creating journal (8192 blocks): done 00:56:50.824 Writing superblocks and filesystem accounting information: 0/64 done 00:56:50.824 00:56:50.824 05:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:56:50.824 05:55:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:56:56.086 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:56:56.086 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:56:56.086 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:56:56.086 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:56:56.086 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:56:56.086 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:56:56.086 
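Each per-filesystem subtest follows the same cycle just traced for ext4 and repeated below for btrfs and xfs: format the partition, mount it, prove a small write round-trips, unmount, and confirm the target process is still alive. In outline (fstype and the force flag are the parameters the helper varies):

# Hedged outline of one nvmf_filesystem_create iteration, matching the ext4 run above.
fstype=ext4                      # btrfs and xfs runs differ only in fstype and force flag
dev=/dev/nvme0n1p1
force=-F                         # ext4 wants -F; btrfs and xfs use -f
mkfs."$fstype" "$force" "$dev"
mount "$dev" /mnt/device
touch /mnt/device/aaa            # write something through NVMe/TCP
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"               # the nvmf_tgt started earlier must still be running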
05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 71078 00:56:56.086 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:56:56.086 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:56:56.086 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:56:56.086 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:56:56.345 00:56:56.345 real 0m5.520s 00:56:56.345 user 0m0.017s 00:56:56.345 sys 0m0.060s 00:56:56.345 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:56:56.345 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:56:56.345 ************************************ 00:56:56.345 END TEST filesystem_ext4 00:56:56.345 ************************************ 00:56:56.345 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:56:56.345 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:56:56.345 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:56:56.345 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:56:56.345 ************************************ 00:56:56.345 START TEST filesystem_btrfs 00:56:56.345 ************************************ 00:56:56.345 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:56:56.345 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:56:56.345 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:56:56.345 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:56:56.345 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:56:56.345 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:56:56.345 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:56:56.345 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:56:56.346 05:55:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:56:56.346 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:56:56.346 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:56:56.346 btrfs-progs v6.8.1 00:56:56.346 See https://btrfs.readthedocs.io for more information. 00:56:56.346 00:56:56.346 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:56:56.346 NOTE: several default settings have changed in version 5.15, please make sure 00:56:56.346 this does not affect your deployments: 00:56:56.346 - DUP for metadata (-m dup) 00:56:56.346 - enabled no-holes (-O no-holes) 00:56:56.346 - enabled free-space-tree (-R free-space-tree) 00:56:56.346 00:56:56.346 Label: (null) 00:56:56.346 UUID: 998a3176-6a49-4513-8be7-7e5fd3b2f182 00:56:56.346 Node size: 16384 00:56:56.346 Sector size: 4096 (CPU page size: 4096) 00:56:56.346 Filesystem size: 510.00MiB 00:56:56.346 Block group profiles: 00:56:56.346 Data: single 8.00MiB 00:56:56.346 Metadata: DUP 32.00MiB 00:56:56.346 System: DUP 8.00MiB 00:56:56.346 SSD detected: yes 00:56:56.346 Zoned device: no 00:56:56.346 Features: extref, skinny-metadata, no-holes, free-space-tree 00:56:56.346 Checksum: crc32c 00:56:56.346 Number of devices: 1 00:56:56.346 Devices: 00:56:56.346 ID SIZE PATH 00:56:56.346 1 510.00MiB /dev/nvme0n1p1 00:56:56.346 00:56:56.346 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:56:56.346 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:56:56.346 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:56:56.346 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:56:56.346 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:56:56.346 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:56:56.346 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:56:56.346 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:56:56.346 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 71078 00:56:56.346 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:56:56.346 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:56:56.621 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:56:56.621 
05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:56:56.621 00:56:56.621 real 0m0.217s 00:56:56.621 user 0m0.022s 00:56:56.621 sys 0m0.059s 00:56:56.621 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:56:56.621 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:56:56.621 ************************************ 00:56:56.621 END TEST filesystem_btrfs 00:56:56.621 ************************************ 00:56:56.621 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:56:56.621 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:56:56.621 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:56:56.621 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:56:56.621 ************************************ 00:56:56.621 START TEST filesystem_xfs 00:56:56.621 ************************************ 00:56:56.621 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:56:56.621 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:56:56.621 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:56:56.621 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:56:56.621 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:56:56.621 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:56:56.621 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:56:56.621 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:56:56.621 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:56:56.621 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:56:56.621 05:55:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:56:56.621 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:56:56.621 = sectsz=512 attr=2, projid32bit=1 00:56:56.621 = crc=1 finobt=1, sparse=1, rmapbt=0 00:56:56.621 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:56:56.621 data 
= bsize=4096 blocks=130560, imaxpct=25 00:56:56.621 = sunit=0 swidth=0 blks 00:56:56.621 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:56:56.621 log =internal log bsize=4096 blocks=16384, version=2 00:56:56.621 = sectsz=512 sunit=0 blks, lazy-count=1 00:56:56.621 realtime =none extsz=4096 blocks=0, rtextents=0 00:56:57.194 Discarding blocks...Done. 00:56:57.194 05:55:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:56:57.194 05:55:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 71078 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:56:59.769 00:56:59.769 real 0m3.089s 00:56:59.769 user 0m0.022s 00:56:59.769 sys 0m0.058s 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:56:59.769 ************************************ 00:56:59.769 END TEST filesystem_xfs 00:56:59.769 ************************************ 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:56:59.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:56:59.769 05:55:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 71078 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 71078 ']' 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 71078 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71078 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:56:59.769 killing process with pid 71078 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71078' 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 71078 00:56:59.769 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 71078 00:57:00.029 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:57:00.029 00:57:00.029 real 0m13.381s 00:57:00.029 user 0m51.148s 00:57:00.029 sys 0m1.988s 00:57:00.029 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:57:00.029 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:57:00.029 ************************************ 00:57:00.029 END TEST nvmf_filesystem_no_in_capsule 00:57:00.029 ************************************ 00:57:00.029 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:57:00.029 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:57:00.029 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:57:00.029 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:57:00.029 ************************************ 00:57:00.029 START TEST nvmf_filesystem_in_capsule 00:57:00.029 ************************************ 00:57:00.029 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:57:00.029 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:57:00.029 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:57:00.029 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:57:00.029 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:57:00.029 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:57:00.029 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=71426 00:57:00.029 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 71426 00:57:00.029 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:57:00.029 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 71426 ']' 00:57:00.029 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:57:00.029 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:57:00.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:57:00.029 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
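nvmfappstart and waitforlisten, as traced above, amount to launching nvmf_tgt inside the test network namespace and polling the default RPC socket until it answers. A minimal sketch, with the command line taken from the trace and the polling bounds assumed:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do    # wait for /var/tmp/spdk.sock to answer (bounds assumed)
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done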
00:57:00.029 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:57:00.029 05:55:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:57:00.288 [2024-12-09 05:55:54.629539] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:57:00.288 [2024-12-09 05:55:54.629725] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:57:00.288 [2024-12-09 05:55:54.778347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:57:00.288 [2024-12-09 05:55:54.810032] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:57:00.288 [2024-12-09 05:55:54.810079] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:57:00.288 [2024-12-09 05:55:54.810105] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:57:00.288 [2024-12-09 05:55:54.810112] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:57:00.288 [2024-12-09 05:55:54.810119] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:57:00.288 [2024-12-09 05:55:54.810935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:57:00.288 [2024-12-09 05:55:54.811033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:57:00.288 [2024-12-09 05:55:54.811113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:57:00.288 [2024-12-09 05:55:54.811115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:57:01.223 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:57:01.223 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:57:01.223 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:57:01.223 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:57:01.223 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:57:01.223 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:57:01.224 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:57:01.224 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:57:01.224 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:01.224 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:57:01.224 [2024-12-09 05:55:55.653774] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:57:01.224 05:55:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:01.224 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:57:01.224 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:01.224 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:57:01.224 Malloc1 00:57:01.224 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:01.224 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:57:01.224 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:01.224 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:57:01.224 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:01.224 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:57:01.224 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:01.224 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:57:01.224 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:01.224 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:57:01.224 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:01.224 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:57:01.224 [2024-12-09 05:55:55.760907] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:57:01.224 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:01.224 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:57:01.224 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:57:01.224 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:57:01.224 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:57:01.224 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:57:01.224 05:55:55 
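The rpc_cmd provisioning above maps one-to-one onto plain scripts/rpc.py calls; -c 4096 is the in-capsule data size that distinguishes this run from the earlier no_in_capsule pass, and the malloc bdev is 512 MiB with a 512-byte block size. Listed for reference only, arguments copied from the trace (in this environment rpc.py must be pointed at the target's namespace and socket):

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
    rpc.py bdev_malloc_create 512 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420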
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:57:01.224 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:01.224 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:57:01.224 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:01.224 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:57:01.224 { 00:57:01.224 "aliases": [ 00:57:01.224 "e1a09f5d-fa35-4291-97d8-22fe4a6a5049" 00:57:01.224 ], 00:57:01.224 "assigned_rate_limits": { 00:57:01.224 "r_mbytes_per_sec": 0, 00:57:01.224 "rw_ios_per_sec": 0, 00:57:01.224 "rw_mbytes_per_sec": 0, 00:57:01.224 "w_mbytes_per_sec": 0 00:57:01.224 }, 00:57:01.224 "block_size": 512, 00:57:01.224 "claim_type": "exclusive_write", 00:57:01.224 "claimed": true, 00:57:01.224 "driver_specific": {}, 00:57:01.224 "memory_domains": [ 00:57:01.224 { 00:57:01.224 "dma_device_id": "system", 00:57:01.224 "dma_device_type": 1 00:57:01.224 }, 00:57:01.224 { 00:57:01.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:57:01.224 "dma_device_type": 2 00:57:01.224 } 00:57:01.224 ], 00:57:01.224 "name": "Malloc1", 00:57:01.224 "num_blocks": 1048576, 00:57:01.224 "product_name": "Malloc disk", 00:57:01.224 "supported_io_types": { 00:57:01.224 "abort": true, 00:57:01.224 "compare": false, 00:57:01.224 "compare_and_write": false, 00:57:01.224 "copy": true, 00:57:01.224 "flush": true, 00:57:01.224 "get_zone_info": false, 00:57:01.224 "nvme_admin": false, 00:57:01.224 "nvme_io": false, 00:57:01.224 "nvme_io_md": false, 00:57:01.224 "nvme_iov_md": false, 00:57:01.224 "read": true, 00:57:01.224 "reset": true, 00:57:01.224 "seek_data": false, 00:57:01.224 "seek_hole": false, 00:57:01.224 "unmap": true, 00:57:01.224 "write": true, 00:57:01.224 "write_zeroes": true, 00:57:01.224 "zcopy": true, 00:57:01.224 "zone_append": false, 00:57:01.224 "zone_management": false 00:57:01.224 }, 00:57:01.224 "uuid": "e1a09f5d-fa35-4291-97d8-22fe4a6a5049", 00:57:01.224 "zoned": false 00:57:01.224 } 00:57:01.224 ]' 00:57:01.224 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:57:01.482 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:57:01.482 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:57:01.482 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:57:01.482 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:57:01.482 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:57:01.482 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:57:01.482 05:55:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:57:01.482 05:55:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:57:01.482 05:55:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:57:01.482 05:55:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:57:01.482 05:55:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:57:01.482 05:55:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:57:04.012 05:55:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:57:04.012 05:55:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:57:04.012 05:55:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:57:04.012 05:55:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:57:04.012 05:55:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:57:04.012 05:55:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:57:04.012 05:55:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:57:04.012 05:55:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:57:04.012 05:55:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:57:04.012 05:55:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:57:04.012 05:55:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:57:04.012 05:55:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:57:04.012 05:55:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:57:04.012 05:55:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:57:04.012 05:55:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:57:04.012 05:55:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:57:04.012 05:55:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:57:04.012 05:55:58 
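On the host side, the trace above condenses to: connect to the subsystem over TCP, wait for the namespace to appear by its serial (waitforserial allows up to 15 attempts, two seconds apart), then lay down a GPT partition covering the whole namespace. The hostnqn/hostid arguments are the per-VM values shown in the trace and are left out of this sketch.

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
    for ((i = 0; i <= 15; i++)); do                  # waitforserial, condensed
        lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME && break
        sleep 2
    done
    mkdir -p /mnt/device
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%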
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:57:04.012 05:55:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:57:04.946 05:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:57:04.946 05:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:57:04.946 05:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:57:04.946 05:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:57:04.946 05:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:57:04.946 ************************************ 00:57:04.946 START TEST filesystem_in_capsule_ext4 00:57:04.946 ************************************ 00:57:04.946 05:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:57:04.946 05:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:57:04.946 05:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:57:04.946 05:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:57:04.946 05:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:57:04.946 05:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:57:04.946 05:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:57:04.946 05:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:57:04.946 05:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:57:04.946 05:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:57:04.946 05:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:57:04.946 mke2fs 1.47.0 (5-Feb-2023) 00:57:04.946 Discarding device blocks: 0/522240 done 00:57:04.946 Creating filesystem with 522240 1k blocks and 130560 inodes 00:57:04.946 Filesystem UUID: ded5282a-f0a4-46d5-912c-362a0a7dbcdf 00:57:04.946 Superblock backups stored on blocks: 00:57:04.946 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:57:04.946 00:57:04.946 Allocating group tables: 0/64 done 00:57:04.946 Writing inode tables: 
0/64 done 00:57:04.946 Creating journal (8192 blocks): done 00:57:04.946 Writing superblocks and filesystem accounting information: 0/64 done 00:57:04.946 00:57:04.946 05:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:57:04.946 05:55:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:57:10.213 05:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:57:10.213 05:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:57:10.213 05:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:57:10.213 05:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:57:10.213 05:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:57:10.213 05:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:57:10.213 05:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 71426 00:57:10.213 05:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:57:10.213 05:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:57:10.213 05:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:57:10.213 05:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:57:10.213 00:57:10.213 real 0m5.544s 00:57:10.213 user 0m0.022s 00:57:10.213 sys 0m0.067s 00:57:10.213 05:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:57:10.213 05:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:57:10.213 ************************************ 00:57:10.213 END TEST filesystem_in_capsule_ext4 00:57:10.213 ************************************ 00:57:10.213 05:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:57:10.213 05:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:57:10.213 05:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:57:10.213 05:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:57:10.213 
************************************ 00:57:10.213 START TEST filesystem_in_capsule_btrfs 00:57:10.213 ************************************ 00:57:10.213 05:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:57:10.213 05:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:57:10.213 05:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:57:10.213 05:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:57:10.213 05:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:57:10.213 05:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:57:10.213 05:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:57:10.213 05:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:57:10.213 05:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:57:10.213 05:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:57:10.213 05:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:57:10.472 btrfs-progs v6.8.1 00:57:10.472 See https://btrfs.readthedocs.io for more information. 00:57:10.472 00:57:10.472 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:57:10.472 NOTE: several default settings have changed in version 5.15, please make sure 00:57:10.472 this does not affect your deployments: 00:57:10.472 - DUP for metadata (-m dup) 00:57:10.472 - enabled no-holes (-O no-holes) 00:57:10.472 - enabled free-space-tree (-R free-space-tree) 00:57:10.472 00:57:10.472 Label: (null) 00:57:10.472 UUID: 30bbce13-e5fd-4bc9-ab93-7361fa60862c 00:57:10.472 Node size: 16384 00:57:10.472 Sector size: 4096 (CPU page size: 4096) 00:57:10.472 Filesystem size: 510.00MiB 00:57:10.472 Block group profiles: 00:57:10.472 Data: single 8.00MiB 00:57:10.472 Metadata: DUP 32.00MiB 00:57:10.472 System: DUP 8.00MiB 00:57:10.472 SSD detected: yes 00:57:10.472 Zoned device: no 00:57:10.472 Features: extref, skinny-metadata, no-holes, free-space-tree 00:57:10.472 Checksum: crc32c 00:57:10.472 Number of devices: 1 00:57:10.472 Devices: 00:57:10.472 ID SIZE PATH 00:57:10.472 1 510.00MiB /dev/nvme0n1p1 00:57:10.472 00:57:10.472 05:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:57:10.472 05:56:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:57:10.472 05:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:57:10.472 05:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:57:10.731 05:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:57:10.731 05:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:57:10.731 05:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:57:10.731 05:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:57:10.731 05:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 71426 00:57:10.731 05:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:57:10.731 05:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:57:10.731 05:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:57:10.731 05:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:57:10.731 00:57:10.731 real 0m0.309s 00:57:10.731 user 0m0.021s 00:57:10.731 sys 0m0.075s 00:57:10.731 05:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:57:10.731 05:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- 
# set +x 00:57:10.731 ************************************ 00:57:10.731 END TEST filesystem_in_capsule_btrfs 00:57:10.731 ************************************ 00:57:10.731 05:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:57:10.731 05:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:57:10.731 05:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:57:10.731 05:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:57:10.731 ************************************ 00:57:10.731 START TEST filesystem_in_capsule_xfs 00:57:10.731 ************************************ 00:57:10.731 05:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:57:10.731 05:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:57:10.731 05:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:57:10.731 05:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:57:10.731 05:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:57:10.731 05:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:57:10.731 05:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:57:10.731 05:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:57:10.731 05:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:57:10.731 05:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:57:10.731 05:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:57:10.731 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:57:10.731 = sectsz=512 attr=2, projid32bit=1 00:57:10.731 = crc=1 finobt=1, sparse=1, rmapbt=0 00:57:10.731 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:57:10.731 data = bsize=4096 blocks=130560, imaxpct=25 00:57:10.731 = sunit=0 swidth=0 blks 00:57:10.731 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:57:10.731 log =internal log bsize=4096 blocks=16384, version=2 00:57:10.731 = sectsz=512 sunit=0 blks, lazy-count=1 00:57:10.731 realtime =none extsz=4096 blocks=0, rtextents=0 00:57:11.298 Discarding blocks...Done. 
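The START TEST / END TEST banners and the real/user/sys summaries that bracket every case in this log come from the run_test wrapper; its observable behaviour is roughly the sketch below. The real helper in common/autotest_common.sh also validates its arguments and manages xtrace, which is omitted here.

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                  # e.g. nvmf_filesystem_create xfs nvme0n1
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }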
00:57:11.298 05:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:57:11.298 05:56:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:57:13.200 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:57:13.200 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:57:13.200 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:57:13.200 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:57:13.200 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:57:13.200 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:57:13.200 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 71426 00:57:13.200 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:57:13.200 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:57:13.200 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:57:13.200 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:57:13.200 00:57:13.200 real 0m2.574s 00:57:13.200 user 0m0.022s 00:57:13.200 sys 0m0.056s 00:57:13.200 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:57:13.200 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:57:13.200 ************************************ 00:57:13.200 END TEST filesystem_in_capsule_xfs 00:57:13.200 ************************************ 00:57:13.200 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:57:13.459 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:57:13.459 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:57:13.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:57:13.459 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:57:13.459 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:57:13.459 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:57:13.459 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:57:13.459 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:57:13.459 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:57:13.459 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:57:13.459 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:57:13.459 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:13.459 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:57:13.459 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:13.459 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:57:13.459 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 71426 00:57:13.459 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 71426 ']' 00:57:13.459 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 71426 00:57:13.459 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:57:13.459 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:57:13.459 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71426 00:57:13.459 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:57:13.459 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:57:13.459 killing process with pid 71426 00:57:13.459 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71426' 00:57:13.459 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 71426 00:57:13.459 05:56:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 71426 00:57:13.718 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:57:13.718 00:57:13.718 real 0m13.596s 00:57:13.718 user 0m52.214s 00:57:13.718 sys 0m1.915s 00:57:13.718 05:56:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:57:13.718 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:57:13.718 ************************************ 00:57:13.718 END TEST nvmf_filesystem_in_capsule 00:57:13.718 ************************************ 00:57:13.718 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:57:13.718 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:57:13.718 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:57:13.718 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:57:13.718 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:57:13.718 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:57:13.719 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:57:13.719 rmmod nvme_tcp 00:57:13.719 rmmod nvme_fabrics 00:57:13.719 rmmod nvme_keyring 00:57:13.719 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:57:13.719 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:57:13.719 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:57:13.719 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:57:13.719 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:57:13.719 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:57:13.719 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:57:13.719 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:57:13.719 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:57:13.719 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:57:13.719 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:57:13.719 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:57:13.719 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:57:13.719 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:57:13.719 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:57:13.978 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:57:13.978 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:57:13.978 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:57:13.978 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:57:13.978 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 
00:57:13.978 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:57:13.978 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:57:13.978 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:57:13.978 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:57:13.978 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:57:13.978 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:57:13.978 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:57:13.978 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:13.978 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:13.978 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:13.978 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@300 -- # return 0 00:57:13.978 00:57:13.978 real 0m28.289s 00:57:13.978 user 1m43.769s 00:57:13.978 sys 0m4.451s 00:57:13.978 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:57:13.978 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:57:13.978 ************************************ 00:57:13.978 END TEST nvmf_filesystem 00:57:13.978 ************************************ 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:57:14.238 ************************************ 00:57:14.238 START TEST nvmf_target_discovery 00:57:14.238 ************************************ 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:57:14.238 * Looking for test storage... 
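Each test script above is launched through the run_test helper, which prints the START TEST / END TEST banners and times the run. A rough sketch of that wrapper, under the assumption that the banner and timing are all it adds (the real helper in autotest_common.sh also manages xtrace state):

run_test_sketch() {    # illustrative stand-in for run_test in autotest_common.sh
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    local start=$SECONDS rc=0
    "$@" || rc=$?                              # run the test script with its arguments
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    echo "real $((SECONDS - start))s (rc=$rc)" # the real helper prints time(1)-style output
    return $rc
}
# usage mirroring the trace:
# run_test_sketch nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp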
00:57:14.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:57:14.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:14.238 --rc genhtml_branch_coverage=1 00:57:14.238 --rc genhtml_function_coverage=1 00:57:14.238 --rc genhtml_legend=1 00:57:14.238 --rc geninfo_all_blocks=1 00:57:14.238 --rc geninfo_unexecuted_blocks=1 00:57:14.238 00:57:14.238 ' 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:57:14.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:14.238 --rc genhtml_branch_coverage=1 00:57:14.238 --rc genhtml_function_coverage=1 00:57:14.238 --rc genhtml_legend=1 00:57:14.238 --rc geninfo_all_blocks=1 00:57:14.238 --rc geninfo_unexecuted_blocks=1 00:57:14.238 00:57:14.238 ' 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:57:14.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:14.238 --rc genhtml_branch_coverage=1 00:57:14.238 --rc genhtml_function_coverage=1 00:57:14.238 --rc genhtml_legend=1 00:57:14.238 --rc geninfo_all_blocks=1 00:57:14.238 --rc geninfo_unexecuted_blocks=1 00:57:14.238 00:57:14.238 ' 00:57:14.238 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:57:14.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:14.238 --rc genhtml_branch_coverage=1 00:57:14.238 --rc genhtml_function_coverage=1 00:57:14.238 --rc genhtml_legend=1 00:57:14.238 --rc geninfo_all_blocks=1 00:57:14.238 --rc geninfo_unexecuted_blocks=1 00:57:14.238 00:57:14.238 ' 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:57:14.239 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 
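The "[: : integer expression expected" message from nvmf/common.sh line 33 above is benign: build_nvmf_app_args tests a knob that is unset in this run, so '[' '' -eq 1 ']' fails the numeric comparison and the branch is simply skipped. A minimal sketch of the defensive form of that test, with a hypothetical knob name since the actual variable is not shown in the trace:

SOME_NVMF_KNOB=${SOME_NVMF_KNOB:-0}        # hypothetical knob; default to 0 so the numeric test never sees ''
if [ "$SOME_NVMF_KNOB" -eq 1 ]; then
    NVMF_APP+=(--placeholder-arg)          # placeholder for whatever the knob would add
fi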
00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:57:14.239 Cannot find device "nvmf_init_br" 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:57:14.239 Cannot find device "nvmf_init_br2" 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:57:14.239 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:57:14.498 Cannot find device "nvmf_tgt_br" 00:57:14.499 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # true 00:57:14.499 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:57:14.499 Cannot find device "nvmf_tgt_br2" 00:57:14.499 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # true 00:57:14.499 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:57:14.499 Cannot find device "nvmf_init_br" 00:57:14.499 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # true 00:57:14.499 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:57:14.499 Cannot find device "nvmf_init_br2" 00:57:14.499 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # true 00:57:14.499 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:57:14.499 Cannot find device "nvmf_tgt_br" 00:57:14.499 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # true 00:57:14.499 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:57:14.499 Cannot find device "nvmf_tgt_br2" 00:57:14.499 05:56:08 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # true 00:57:14.499 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:57:14.499 Cannot find device "nvmf_br" 00:57:14.499 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # true 00:57:14.499 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:57:14.499 Cannot find device "nvmf_init_if" 00:57:14.499 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # true 00:57:14.499 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:57:14.499 Cannot find device "nvmf_init_if2" 00:57:14.499 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # true 00:57:14.499 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:57:14.499 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:57:14.499 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # true 00:57:14.499 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:57:14.499 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:57:14.499 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # true 00:57:14.499 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:57:14.499 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:57:14.499 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:57:14.499 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:57:14.499 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:57:14.499 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:57:14.499 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:57:14.499 05:56:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:57:14.499 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:57:14.499 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:57:14.499 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:57:14.499 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:57:14.499 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:57:14.499 05:56:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:57:14.499 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:57:14.499 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:57:14.499 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:57:14.499 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:57:14.499 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:57:14.499 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:57:14.499 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:57:14.499 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:57:14.499 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:57:14.757 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:57:14.757 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:57:14.757 00:57:14.757 --- 10.0.0.3 ping statistics --- 00:57:14.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:14.757 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:57:14.757 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:57:14.757 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.081 ms 00:57:14.757 00:57:14.757 --- 10.0.0.4 ping statistics --- 00:57:14.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:14.757 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:57:14.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:57:14.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:57:14.757 00:57:14.757 --- 10.0.0.1 ping statistics --- 00:57:14.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:14.757 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:57:14.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:57:14.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:57:14.757 00:57:14.757 --- 10.0.0.2 ping statistics --- 00:57:14.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:14.757 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@461 -- # return 0 00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=72008 00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
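Condensed, the nvmf_veth_init sequence traced above builds two initiator-side veth pairs in the root namespace, two pairs whose far ends live in the nvmf_tgt_ns_spdk namespace for the target side, and bridges the host-side ends together; that is why 10.0.0.1/10.0.0.2 (initiator) can reach 10.0.0.3/10.0.0.4 (target) in the ping checks. A runnable sketch of the same topology using the names and addresses from the log (the harness embeds the full rule text in the iptables comment; the plain SPDK_NVMF tag below keeps the teardown grep working):

ip netns add nvmf_tgt_ns_spdk

# initiator-side veth pairs stay in the root namespace
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
# target-side pairs: one end moves into the namespace
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# addressing: initiators on .1/.2, targets on .3/.4
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# bring everything up and bridge the host-side ends together
ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# allow NVMe/TCP traffic in and across the bridge, tagged so teardown can strip the rules
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF

ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4   # initiator -> target connectivity check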
00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 72008 00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 72008 ']' 00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:57:14.757 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:57:14.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:57:14.758 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:57:14.758 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:57:14.758 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:14.758 [2024-12-09 05:56:09.266850] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:57:14.758 [2024-12-09 05:56:09.266943] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:57:15.016 [2024-12-09 05:56:09.414370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:57:15.016 [2024-12-09 05:56:09.442908] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:57:15.016 [2024-12-09 05:56:09.442987] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:57:15.016 [2024-12-09 05:56:09.442997] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:57:15.016 [2024-12-09 05:56:09.443004] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:57:15.016 [2024-12-09 05:56:09.443010] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
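Once the target process is up inside the namespace, the harness waits for the RPC socket and then provisions everything the discovery test needs with the rpc_cmd calls traced below: a TCP transport, four null bdevs, four subsystems with one namespace and one listener each, a discovery listener, and a referral on port 4430. A hedged sketch of that sequence using scripts/rpc.py directly; the polling loop is a simplification of the real waitforlisten helper:

RPC=/var/tmp/spdk.sock
# crude waitforlisten: poll until the RPC socket answers
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$RPC" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$RPC" "$@"; }

rpc nvmf_create_transport -t tcp -o -u 8192
for i in 1 2 3 4; do
    rpc bdev_null_create "Null$i" 102400 512                       # 100 MiB null bdev, 512 B blocks
    rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.3 -s 4420
done
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.3 -s 4430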
00:57:15.016 [2024-12-09 05:56:09.443824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:57:15.016 [2024-12-09 05:56:09.444490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:57:15.016 [2024-12-09 05:56:09.444689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:57:15.016 [2024-12-09 05:56:09.444690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:57:15.016 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:57:15.016 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:57:15.016 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:57:15.016 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:57:15.016 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.016 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:57:15.016 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:57:15.016 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.016 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.016 [2024-12-09 05:56:09.576466] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:57:15.016 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.016 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:57:15.016 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:57:15.016 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:57:15.016 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.016 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.275 Null1 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.275 05:56:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.275 [2024-12-09 05:56:09.620668] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.275 Null2 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:57:15.275 Null3 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.275 Null4 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:57:15.275 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.276 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.276 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.276 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:57:15.276 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.276 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.276 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.276 05:56:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:57:15.276 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.276 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.276 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.276 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:57:15.276 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.276 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.276 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.276 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.3 -s 4430 00:57:15.276 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.276 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.276 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.276 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -a 10.0.0.3 -s 4420 00:57:15.276 00:57:15.276 Discovery Log Number of Records 6, Generation counter 6 00:57:15.276 =====Discovery Log Entry 0====== 00:57:15.276 trtype: tcp 00:57:15.276 adrfam: ipv4 00:57:15.276 subtype: current discovery subsystem 00:57:15.276 treq: not required 00:57:15.276 portid: 0 00:57:15.276 trsvcid: 4420 00:57:15.276 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:57:15.276 traddr: 10.0.0.3 00:57:15.276 eflags: explicit discovery connections, duplicate discovery information 00:57:15.276 sectype: none 00:57:15.276 =====Discovery Log Entry 1====== 00:57:15.276 trtype: tcp 00:57:15.276 adrfam: ipv4 00:57:15.276 subtype: nvme subsystem 00:57:15.276 treq: not required 00:57:15.276 portid: 0 00:57:15.276 trsvcid: 4420 00:57:15.276 subnqn: nqn.2016-06.io.spdk:cnode1 00:57:15.276 traddr: 10.0.0.3 00:57:15.276 eflags: none 00:57:15.276 sectype: none 00:57:15.276 =====Discovery Log Entry 2====== 00:57:15.276 trtype: tcp 00:57:15.276 adrfam: ipv4 00:57:15.276 subtype: nvme subsystem 00:57:15.276 treq: not required 00:57:15.276 portid: 0 00:57:15.276 trsvcid: 4420 00:57:15.276 subnqn: nqn.2016-06.io.spdk:cnode2 00:57:15.276 traddr: 10.0.0.3 00:57:15.276 eflags: none 00:57:15.276 sectype: none 00:57:15.276 =====Discovery Log Entry 3====== 00:57:15.276 trtype: tcp 00:57:15.276 adrfam: ipv4 00:57:15.276 subtype: nvme subsystem 00:57:15.276 treq: not required 00:57:15.276 portid: 0 00:57:15.276 trsvcid: 4420 00:57:15.276 subnqn: nqn.2016-06.io.spdk:cnode3 00:57:15.276 traddr: 10.0.0.3 00:57:15.276 eflags: none 00:57:15.276 sectype: none 00:57:15.276 =====Discovery Log Entry 4====== 00:57:15.276 trtype: tcp 00:57:15.276 adrfam: ipv4 00:57:15.276 subtype: nvme subsystem 
00:57:15.276 treq: not required 00:57:15.276 portid: 0 00:57:15.276 trsvcid: 4420 00:57:15.276 subnqn: nqn.2016-06.io.spdk:cnode4 00:57:15.276 traddr: 10.0.0.3 00:57:15.276 eflags: none 00:57:15.276 sectype: none 00:57:15.276 =====Discovery Log Entry 5====== 00:57:15.276 trtype: tcp 00:57:15.276 adrfam: ipv4 00:57:15.276 subtype: discovery subsystem referral 00:57:15.276 treq: not required 00:57:15.276 portid: 0 00:57:15.276 trsvcid: 4430 00:57:15.276 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:57:15.276 traddr: 10.0.0.3 00:57:15.276 eflags: none 00:57:15.276 sectype: none 00:57:15.276 Perform nvmf subsystem discovery via RPC 00:57:15.276 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:57:15.276 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:57:15.276 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.276 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.534 [ 00:57:15.534 { 00:57:15.534 "allow_any_host": true, 00:57:15.534 "hosts": [], 00:57:15.534 "listen_addresses": [ 00:57:15.534 { 00:57:15.534 "adrfam": "IPv4", 00:57:15.534 "traddr": "10.0.0.3", 00:57:15.534 "trsvcid": "4420", 00:57:15.534 "trtype": "TCP" 00:57:15.534 } 00:57:15.534 ], 00:57:15.534 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:57:15.534 "subtype": "Discovery" 00:57:15.534 }, 00:57:15.534 { 00:57:15.534 "allow_any_host": true, 00:57:15.534 "hosts": [], 00:57:15.534 "listen_addresses": [ 00:57:15.534 { 00:57:15.534 "adrfam": "IPv4", 00:57:15.534 "traddr": "10.0.0.3", 00:57:15.534 "trsvcid": "4420", 00:57:15.534 "trtype": "TCP" 00:57:15.534 } 00:57:15.534 ], 00:57:15.534 "max_cntlid": 65519, 00:57:15.534 "max_namespaces": 32, 00:57:15.534 "min_cntlid": 1, 00:57:15.534 "model_number": "SPDK bdev Controller", 00:57:15.534 "namespaces": [ 00:57:15.534 { 00:57:15.534 "bdev_name": "Null1", 00:57:15.534 "name": "Null1", 00:57:15.534 "nguid": "A19FF5B7D6A44928A35686D1A875F045", 00:57:15.534 "nsid": 1, 00:57:15.534 "uuid": "a19ff5b7-d6a4-4928-a356-86d1a875f045" 00:57:15.534 } 00:57:15.534 ], 00:57:15.534 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:57:15.534 "serial_number": "SPDK00000000000001", 00:57:15.534 "subtype": "NVMe" 00:57:15.534 }, 00:57:15.534 { 00:57:15.534 "allow_any_host": true, 00:57:15.534 "hosts": [], 00:57:15.534 "listen_addresses": [ 00:57:15.534 { 00:57:15.534 "adrfam": "IPv4", 00:57:15.534 "traddr": "10.0.0.3", 00:57:15.534 "trsvcid": "4420", 00:57:15.534 "trtype": "TCP" 00:57:15.534 } 00:57:15.534 ], 00:57:15.534 "max_cntlid": 65519, 00:57:15.534 "max_namespaces": 32, 00:57:15.534 "min_cntlid": 1, 00:57:15.534 "model_number": "SPDK bdev Controller", 00:57:15.534 "namespaces": [ 00:57:15.534 { 00:57:15.534 "bdev_name": "Null2", 00:57:15.534 "name": "Null2", 00:57:15.534 "nguid": "E739B188545047CC9DA0D35E1FF4D643", 00:57:15.534 "nsid": 1, 00:57:15.534 "uuid": "e739b188-5450-47cc-9da0-d35e1ff4d643" 00:57:15.534 } 00:57:15.534 ], 00:57:15.534 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:57:15.534 "serial_number": "SPDK00000000000002", 00:57:15.534 "subtype": "NVMe" 00:57:15.534 }, 00:57:15.534 { 00:57:15.534 "allow_any_host": true, 00:57:15.534 "hosts": [], 00:57:15.534 "listen_addresses": [ 00:57:15.534 { 00:57:15.534 "adrfam": "IPv4", 00:57:15.534 "traddr": "10.0.0.3", 00:57:15.534 "trsvcid": "4420", 00:57:15.534 
"trtype": "TCP" 00:57:15.534 } 00:57:15.534 ], 00:57:15.534 "max_cntlid": 65519, 00:57:15.534 "max_namespaces": 32, 00:57:15.534 "min_cntlid": 1, 00:57:15.534 "model_number": "SPDK bdev Controller", 00:57:15.535 "namespaces": [ 00:57:15.535 { 00:57:15.535 "bdev_name": "Null3", 00:57:15.535 "name": "Null3", 00:57:15.535 "nguid": "D252E0F97BDA4C0B8E2FABC36EB452FF", 00:57:15.535 "nsid": 1, 00:57:15.535 "uuid": "d252e0f9-7bda-4c0b-8e2f-abc36eb452ff" 00:57:15.535 } 00:57:15.535 ], 00:57:15.535 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:57:15.535 "serial_number": "SPDK00000000000003", 00:57:15.535 "subtype": "NVMe" 00:57:15.535 }, 00:57:15.535 { 00:57:15.535 "allow_any_host": true, 00:57:15.535 "hosts": [], 00:57:15.535 "listen_addresses": [ 00:57:15.535 { 00:57:15.535 "adrfam": "IPv4", 00:57:15.535 "traddr": "10.0.0.3", 00:57:15.535 "trsvcid": "4420", 00:57:15.535 "trtype": "TCP" 00:57:15.535 } 00:57:15.535 ], 00:57:15.535 "max_cntlid": 65519, 00:57:15.535 "max_namespaces": 32, 00:57:15.535 "min_cntlid": 1, 00:57:15.535 "model_number": "SPDK bdev Controller", 00:57:15.535 "namespaces": [ 00:57:15.535 { 00:57:15.535 "bdev_name": "Null4", 00:57:15.535 "name": "Null4", 00:57:15.535 "nguid": "91ABBBA7B0864FB59D38EB12DB775C91", 00:57:15.535 "nsid": 1, 00:57:15.535 "uuid": "91abbba7-b086-4fb5-9d38-eb12db775c91" 00:57:15.535 } 00:57:15.535 ], 00:57:15.535 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:57:15.535 "serial_number": "SPDK00000000000004", 00:57:15.535 "subtype": "NVMe" 00:57:15.535 } 00:57:15.535 ] 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.535 05:56:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.3 -s 4430 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:15.535 05:56:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:57:15.535 05:56:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:15.535 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:57:15.535 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:57:15.535 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:57:15.535 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:57:15.535 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:57:15.535 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:57:15.535 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:57:15.535 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:57:15.535 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:57:15.535 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:57:15.535 rmmod nvme_tcp 00:57:15.535 rmmod nvme_fabrics 00:57:15.535 rmmod nvme_keyring 00:57:15.535 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:57:15.535 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:57:15.535 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:57:15.535 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 72008 ']' 00:57:15.535 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 72008 00:57:15.535 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 72008 ']' 00:57:15.535 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 72008 00:57:15.535 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:57:15.535 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:57:15.535 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72008 00:57:15.535 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:57:15.535 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:57:15.535 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72008' 00:57:15.535 killing process with pid 72008 00:57:15.794 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 72008 00:57:15.794 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 72008 00:57:15.794 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:57:15.794 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:57:15.794 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:57:15.794 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:57:15.794 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:57:15.794 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:57:15.794 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:57:15.794 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:57:15.794 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:57:15.794 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:57:15.794 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:57:15.794 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:57:15.794 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:57:15.794 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:57:15.794 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:57:15.794 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:57:15.794 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:57:15.794 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:57:16.052 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:57:16.053 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:57:16.053 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:57:16.053 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:57:16.053 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:57:16.053 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:16.053 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:16.053 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:16.053 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@300 -- # return 0 00:57:16.053 00:57:16.053 real 0m1.926s 00:57:16.053 user 0m3.560s 00:57:16.053 sys 0m0.670s 00:57:16.053 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:57:16.053 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:57:16.053 ************************************ 00:57:16.053 END TEST nvmf_target_discovery 00:57:16.053 ************************************ 00:57:16.053 05:56:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:57:16.053 05:56:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:57:16.053 05:56:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:57:16.053 05:56:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:57:16.053 ************************************ 00:57:16.053 START TEST nvmf_referrals 00:57:16.053 ************************************ 00:57:16.053 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:57:16.053 * Looking for test storage... 00:57:16.053 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:57:16.053 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:57:16.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:16.313 --rc genhtml_branch_coverage=1 00:57:16.313 --rc genhtml_function_coverage=1 00:57:16.313 --rc genhtml_legend=1 00:57:16.313 --rc geninfo_all_blocks=1 00:57:16.313 --rc geninfo_unexecuted_blocks=1 00:57:16.313 00:57:16.313 ' 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:57:16.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:16.313 --rc genhtml_branch_coverage=1 00:57:16.313 --rc genhtml_function_coverage=1 00:57:16.313 --rc genhtml_legend=1 00:57:16.313 --rc geninfo_all_blocks=1 00:57:16.313 --rc geninfo_unexecuted_blocks=1 00:57:16.313 00:57:16.313 ' 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:57:16.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:16.313 --rc genhtml_branch_coverage=1 00:57:16.313 --rc genhtml_function_coverage=1 00:57:16.313 --rc genhtml_legend=1 00:57:16.313 --rc geninfo_all_blocks=1 00:57:16.313 --rc geninfo_unexecuted_blocks=1 00:57:16.313 00:57:16.313 ' 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:57:16.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:16.313 --rc genhtml_branch_coverage=1 00:57:16.313 --rc genhtml_function_coverage=1 00:57:16.313 --rc genhtml_legend=1 00:57:16.313 --rc geninfo_all_blocks=1 00:57:16.313 --rc geninfo_unexecuted_blocks=1 00:57:16.313 00:57:16.313 ' 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 
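(Editor's note: the trace above is scripts/common.sh deciding whether the installed lcov predates version 2 before it exports LCOV_OPTS. A minimal, hypothetical sketch of that dotted-version comparison follows; the real helpers are cmp_versions/lt in scripts/common.sh, and the function name below is illustrative only.)

  # Illustrative sketch: split both versions on the same IFS=.-: seen in the
  # trace and compare component by component; returns 0 when $1 < $2.
  version_lt() {
      local -a a b
      IFS='.-:' read -ra a <<< "$1"
      IFS='.-:' read -ra b <<< "$2"
      local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < max; i++ )); do
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # strictly greater -> not less-than
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly smaller -> less-than
      done
      return 1   # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov < 2, keep the legacy LCOV_OPTS"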
00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:57:16.313 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:57:16.314 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:57:16.314 05:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@460 -- # nvmf_veth_init 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:57:16.314 Cannot find device "nvmf_init_br" 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:57:16.314 Cannot find device "nvmf_init_br2" 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:57:16.314 Cannot find device "nvmf_tgt_br" 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # true 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:57:16.314 Cannot find device "nvmf_tgt_br2" 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # true 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:57:16.314 Cannot find device "nvmf_init_br" 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # true 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:57:16.314 Cannot find device "nvmf_init_br2" 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # true 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:57:16.314 Cannot find device "nvmf_tgt_br" 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # true 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:57:16.314 Cannot find device "nvmf_tgt_br2" 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # true 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:57:16.314 Cannot find device "nvmf_br" 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # true 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:57:16.314 Cannot find device "nvmf_init_if" 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # true 00:57:16.314 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:57:16.572 Cannot find device "nvmf_init_if2" 00:57:16.572 05:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # true 00:57:16.572 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:57:16.572 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:57:16.572 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # true 00:57:16.572 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:57:16.572 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:57:16.572 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # true 00:57:16.572 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:57:16.572 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:57:16.572 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:57:16.572 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:57:16.572 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:57:16.572 05:56:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:57:16.572 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:57:16.572 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:57:16.572 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:57:16.572 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:57:16.572 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:57:16.572 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:57:16.572 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:57:16.572 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:57:16.573 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:57:16.573 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:57:16.573 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:57:16.573 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:57:16.573 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:57:16.573 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:57:16.573 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:57:16.573 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:57:16.573 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:57:16.573 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:57:16.573 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:57:16.573 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:57:16.573 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:57:16.573 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:57:16.573 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:57:16.573 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:57:16.573 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:57:16.573 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:57:16.573 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:57:16.573 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:57:16.573 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:57:16.573 00:57:16.573 --- 10.0.0.3 ping statistics --- 00:57:16.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:16.573 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:57:16.831 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:57:16.831 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:57:16.831 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.074 ms 00:57:16.831 00:57:16.831 --- 10.0.0.4 ping statistics --- 00:57:16.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:16.831 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:57:16.831 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:57:16.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:57:16.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:57:16.831 00:57:16.831 --- 10.0.0.1 ping statistics --- 00:57:16.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:16.831 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:57:16.831 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:57:16.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:57:16.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:57:16.831 00:57:16.831 --- 10.0.0.2 ping statistics --- 00:57:16.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:16.831 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:57:16.831 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:57:16.831 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@461 -- # return 0 00:57:16.831 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:57:16.831 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:57:16.831 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:57:16.831 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:57:16.831 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:57:16.831 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:57:16.831 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:57:16.831 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:57:16.831 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:57:16.831 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:57:16.831 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:57:16.831 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=72273 00:57:16.831 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 72273 00:57:16.831 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:57:16.831 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 72273 ']' 00:57:16.831 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:57:16.831 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:57:16.831 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:57:16.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:57:16.831 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:57:16.831 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:57:16.831 [2024-12-09 05:56:11.241738] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
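(Editor's note: at this point nvmf_tgt (pid 72273) is up inside the nvmf_tgt_ns_spdk namespace, and the entries that follow configure discovery referrals over its RPC socket. A condensed sketch of that sequence, written as direct scripts/rpc.py calls instead of the test's rpc_cmd wrapper; the transport options, addresses, ports and jq filter are the ones visible in this log, everything else is illustrative.)

  # Rough outline of the referral setup driven by the following entries.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                          # TCP transport, matching "-o -u 8192" above
  $rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery # discovery service on the target-side veth
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430          # the three referrals checked below
  done
  $rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr'        # expect the three IPs back, as the test asserts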
00:57:16.831 [2024-12-09 05:56:11.241803] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:57:16.831 [2024-12-09 05:56:11.390355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:57:17.090 [2024-12-09 05:56:11.431282] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:57:17.090 [2024-12-09 05:56:11.431343] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:57:17.090 [2024-12-09 05:56:11.431358] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:57:17.090 [2024-12-09 05:56:11.431368] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:57:17.090 [2024-12-09 05:56:11.431377] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:57:17.090 [2024-12-09 05:56:11.432347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:57:17.090 [2024-12-09 05:56:11.432432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:57:17.090 [2024-12-09 05:56:11.432535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:57:17.090 [2024-12-09 05:56:11.432539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:57:17.090 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:57:17.090 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:57:17.090 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:57:17.090 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:57:17.090 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:57:17.090 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:57:17.090 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:57:17.090 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:17.090 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:57:17.090 [2024-12-09 05:56:11.576840] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:57:17.090 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:17.090 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery 00:57:17.090 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:17.091 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:57:17.091 [2024-12-09 05:56:11.589052] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:57:17.091 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:17.091 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:57:17.091 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:17.091 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:57:17.091 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:17.091 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:57:17.091 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:17.091 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:57:17.091 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:17.091 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:57:17.091 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:17.091 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:57:17.091 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:17.091 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:57:17.091 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:57:17.091 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:17.091 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:57:17.091 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:17.091 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:57:17.091 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -a 10.0.0.3 -s 8009 -o json 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -a 10.0.0.3 -s 8009 -o json 00:57:17.350 05:56:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:57:17.609 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:57:17.609 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:57:17.609 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:57:17.609 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:17.609 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:57:17.609 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:17.609 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:57:17.609 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:17.609 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:57:17.609 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:17.609 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:57:17.609 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:57:17.609 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:57:17.609 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:17.609 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:57:17.609 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:57:17.609 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:57:17.609 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:17.609 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:57:17.609 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:57:17.609 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:57:17.609 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:57:17.609 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:57:17.609 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -a 10.0.0.3 -s 8009 -o json 00:57:17.609 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:57:17.609 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:57:17.867 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:57:17.867 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:57:17.867 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:57:17.867 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:57:17.867 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:57:17.867 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -a 10.0.0.3 -s 8009 -o json 00:57:17.867 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:57:17.867 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:57:17.867 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:57:17.867 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:57:17.867 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:57:17.867 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -a 10.0.0.3 -s 8009 -o json 00:57:17.867 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:57:18.125 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:57:18.125 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:57:18.125 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:18.125 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:57:18.125 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:18.125 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:57:18.125 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:57:18.126 05:56:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:57:18.126 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:57:18.126 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:57:18.126 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:18.126 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:57:18.126 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:18.126 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:57:18.126 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:57:18.126 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:57:18.126 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:57:18.126 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:57:18.126 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -a 10.0.0.3 -s 8009 -o json 00:57:18.126 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:57:18.126 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:57:18.126 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:57:18.126 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:57:18.126 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:57:18.126 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:57:18.126 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:57:18.126 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -a 10.0.0.3 -s 8009 -o json 00:57:18.126 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:57:18.385 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:57:18.385 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:57:18.385 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:57:18.385 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:57:18.385 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 
--hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -a 10.0.0.3 -s 8009 -o json 00:57:18.385 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:57:18.385 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:57:18.385 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:57:18.385 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:18.385 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:57:18.385 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:18.385 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:57:18.385 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:57:18.385 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:18.385 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:57:18.385 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:18.385 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:57:18.385 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:57:18.385 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:57:18.385 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:57:18.385 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -a 10.0.0.3 -s 8009 -o json 00:57:18.385 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:57:18.385 05:56:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:57:18.644 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:57:18.644 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:57:18.644 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:57:18.644 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:57:18.644 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:57:18.644 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:57:18.644 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:57:18.644 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:57:18.644 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:57:18.644 
05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:57:18.644 rmmod nvme_tcp 00:57:18.644 rmmod nvme_fabrics 00:57:18.644 rmmod nvme_keyring 00:57:18.644 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:57:18.644 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:57:18.644 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:57:18.644 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 72273 ']' 00:57:18.644 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 72273 00:57:18.644 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 72273 ']' 00:57:18.644 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 72273 00:57:18.644 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:57:18.644 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:57:18.644 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72273 00:57:18.903 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:57:18.903 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:57:18.903 killing process with pid 72273 00:57:18.903 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72273' 00:57:18.903 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 72273 00:57:18.903 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 72273 00:57:18.903 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:57:18.903 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:57:18.903 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:57:18.903 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:57:18.903 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:57:18.903 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:57:18.903 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:57:18.903 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:57:18.903 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:57:18.903 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:57:18.903 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:57:18.903 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:57:18.903 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:57:18.903 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:57:18.903 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:57:18.903 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:57:18.903 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:57:18.903 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:57:19.163 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:57:19.163 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:57:19.163 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:57:19.163 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:57:19.163 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@246 -- # remove_spdk_ns 00:57:19.163 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:19.163 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:19.163 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:19.163 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@300 -- # return 0 00:57:19.163 00:57:19.163 real 0m3.056s 00:57:19.163 user 0m8.815s 00:57:19.163 sys 0m0.927s 00:57:19.163 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:57:19.163 ************************************ 00:57:19.163 END TEST nvmf_referrals 00:57:19.163 ************************************ 00:57:19.163 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:57:19.163 05:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:57:19.163 05:56:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:57:19.163 05:56:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:57:19.163 05:56:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:57:19.163 ************************************ 00:57:19.163 START TEST nvmf_connect_disconnect 00:57:19.163 ************************************ 00:57:19.163 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:57:19.163 * Looking for test storage... 
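The referral checks that just finished compare two views of the same discovery service: the target's own referral table, read over RPC, and the discovery log page a host retrieves with nvme discover. A condensed stand-alone version of that comparison, reusing the endpoint and jq filters from the trace above (rpc_cmd in these logs is effectively the test suite's wrapper around scripts/rpc.py):

  # Target-side view: referrals the SPDK target will hand out.
  scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

  # Host-side view: every discovery log entry except the discovery
  # subsystem being queried, i.e. exported subsystems plus referrals.
  nvme discover -t tcp -a 10.0.0.3 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

  # Removing a referral (same arguments as the removal traced above)
  # should drop it from both views; an empty table reports length 0.
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_discovery_get_referrals | jq length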
00:57:19.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:57:19.163 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:57:19.163 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:57:19.163 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:57:19.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:19.423 --rc genhtml_branch_coverage=1 00:57:19.423 --rc genhtml_function_coverage=1 00:57:19.423 --rc genhtml_legend=1 00:57:19.423 --rc geninfo_all_blocks=1 00:57:19.423 --rc geninfo_unexecuted_blocks=1 00:57:19.423 00:57:19.423 ' 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:57:19.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:19.423 --rc genhtml_branch_coverage=1 00:57:19.423 --rc genhtml_function_coverage=1 00:57:19.423 --rc genhtml_legend=1 00:57:19.423 --rc geninfo_all_blocks=1 00:57:19.423 --rc geninfo_unexecuted_blocks=1 00:57:19.423 00:57:19.423 ' 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:57:19.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:19.423 --rc genhtml_branch_coverage=1 00:57:19.423 --rc genhtml_function_coverage=1 00:57:19.423 --rc genhtml_legend=1 00:57:19.423 --rc geninfo_all_blocks=1 00:57:19.423 --rc geninfo_unexecuted_blocks=1 00:57:19.423 00:57:19.423 ' 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:57:19.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:19.423 --rc genhtml_branch_coverage=1 00:57:19.423 --rc genhtml_function_coverage=1 00:57:19.423 --rc genhtml_legend=1 00:57:19.423 --rc geninfo_all_blocks=1 00:57:19.423 --rc geninfo_unexecuted_blocks=1 00:57:19.423 00:57:19.423 ' 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:57:19.423 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:57:19.424 05:56:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:57:19.424 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@460 -- # nvmf_veth_init 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:57:19.424 Cannot find device "nvmf_init_br" 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:57:19.424 Cannot find device "nvmf_init_br2" 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:57:19.424 Cannot find device "nvmf_tgt_br" 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # true 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:57:19.424 Cannot find device "nvmf_tgt_br2" 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # true 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:57:19.424 Cannot find device "nvmf_init_br" 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # true 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:57:19.424 Cannot find device "nvmf_init_br2" 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # true 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:57:19.424 Cannot find device "nvmf_tgt_br" 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # true 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:57:19.424 Cannot find device "nvmf_tgt_br2" 00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # true 
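One message worth noting from the common.sh sourcing a little earlier is "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected": a numeric [ ... -eq 1 ] test is being fed an unset variable, so bash complains and the check simply falls through. It is harmless in this run, but the usual guard is to give the variable a default (the variable name below is only illustrative; the trace does not show which one line 33 reads):

  # Fails with "[: : integer expression expected" when VAR is unset:
  #   [ "$VAR" -eq 1 ]
  # Safe variants:
  if [ "${VAR:-0}" -eq 1 ]; then echo "enabled"; fi     # default to 0 before the numeric test
  if [[ "$VAR" == "1" ]]; then echo "enabled"; fi       # string compare, no arithmetic needed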
00:57:19.424 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:57:19.424 Cannot find device "nvmf_br" 00:57:19.425 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # true 00:57:19.425 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:57:19.425 Cannot find device "nvmf_init_if" 00:57:19.425 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # true 00:57:19.425 05:56:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:57:19.684 Cannot find device "nvmf_init_if2" 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # true 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:57:19.684 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # true 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:57:19.684 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # true 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:57:19.684 05:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:57:19.684 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:57:19.684 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:57:19.684 00:57:19.684 --- 10.0.0.3 ping statistics --- 00:57:19.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:19.684 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:57:19.684 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:57:19.684 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:57:19.684 00:57:19.684 --- 10.0.0.4 ping statistics --- 00:57:19.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:19.684 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:57:19.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:57:19.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:57:19.684 00:57:19.684 --- 10.0.0.1 ping statistics --- 00:57:19.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:19.684 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:57:19.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:57:19.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:57:19.684 00:57:19.684 --- 10.0.0.2 ping statistics --- 00:57:19.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:19.684 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@461 -- # return 0 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:57:19.684 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:57:19.685 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:57:19.685 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:57:19.685 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:57:19.685 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:57:19.944 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:57:19.944 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:57:19.944 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:57:19.944 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:57:19.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
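Everything from the "Cannot find device" probes down to the four pings is nvmf_veth_init rebuilding the virtual test network: veth pairs for two initiator and two target interfaces, the target ends moved into the nvmf_tgt_ns_spdk namespace, 10.0.0.1-10.0.0.4/24 assigned, the bridge ends enslaved to nvmf_br, port 4420 opened in iptables, and connectivity verified in both directions. Reduced to one initiator pair and one target pair, the setup is roughly:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end + bridge end
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target end + bridge end
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  ip link add nvmf_br type bridge
  for dev in nvmf_br nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                   # root namespace -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root namespace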
00:57:19.944 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=72612 00:57:19.944 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:57:19.944 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 72612 00:57:19.944 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 72612 ']' 00:57:19.944 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:57:19.944 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:57:19.944 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:57:19.944 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:57:19.944 05:56:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:57:19.944 [2024-12-09 05:56:14.341384] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:57:19.944 [2024-12-09 05:56:14.341711] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:57:19.944 [2024-12-09 05:56:14.488719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:57:19.944 [2024-12-09 05:56:14.519812] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:57:19.944 [2024-12-09 05:56:14.520098] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:57:19.944 [2024-12-09 05:56:14.520420] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:57:19.944 [2024-12-09 05:56:14.520583] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:57:19.944 [2024-12-09 05:56:14.520594] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
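The target itself runs inside that namespace with four cores (-m 0xF) and every tracepoint group enabled (-e 0xFFFF), and the harness then blocks until the RPC socket answers rather than sleeping for a fixed time. A minimal stand-alone sketch of the nvmfappstart/waitforlisten step, assuming the same repo layout as this run:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Poll until the app listens on /var/tmp/spdk.sock (roughly what
  # "waitforlisten 72612" does above, minus the retry limit).
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done

  # The startup banner also suggests 'spdk_trace -s nvmf -i 0' to snapshot
  # the tracepoints enabled by -e 0xFFFF while the target is running.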
00:57:19.944 [2024-12-09 05:56:14.521617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:57:19.944 [2024-12-09 05:56:14.521755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:57:19.944 [2024-12-09 05:56:14.521802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:57:19.944 [2024-12-09 05:56:14.521942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:57:20.880 05:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:57:20.880 05:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:57:20.880 05:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:57:20.880 05:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:57:20.880 05:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:57:20.880 05:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:57:20.880 05:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:57:20.880 05:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:20.880 05:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:57:20.880 [2024-12-09 05:56:15.363636] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:57:20.880 05:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:20.880 05:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:57:20.881 05:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:20.881 05:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:57:20.881 05:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:20.881 05:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:57:20.881 05:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:57:20.881 05:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:20.881 05:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:57:20.881 05:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:20.881 05:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:57:20.881 05:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:20.881 05:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:57:20.881 05:56:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:20.881 05:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:57:20.881 05:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:20.881 05:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:57:20.881 [2024-12-09 05:56:15.429105] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:57:20.881 05:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:20.881 05:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:57:20.881 05:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:57:20.881 05:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:57:23.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:57:25.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:57:27.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:57:30.412 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:57:32.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:57:32.320 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:57:32.320 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:57:32.320 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:57:32.320 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:57:32.320 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:57:32.320 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:57:32.320 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:57:32.320 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:57:32.320 rmmod nvme_tcp 00:57:32.320 rmmod nvme_fabrics 00:57:32.320 rmmod nvme_keyring 00:57:32.321 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:57:32.321 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:57:32.321 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:57:32.321 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 72612 ']' 00:57:32.321 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 72612 00:57:32.321 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 72612 ']' 00:57:32.321 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 72612 00:57:32.321 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
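The connect_disconnect body above creates the TCP transport, a 64 MB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace, and a listener on 10.0.0.3:4420, then runs five iterations; only the nvme disconnect output ("... disconnected 1 controller(s)") is visible in the log, so the loop below is a hedged reconstruction using standard nvme-cli options and the parameters shown in the trace:

  # Bring-up, taken from the RPCs traced above.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  scripts/rpc.py bdev_malloc_create 64 512                                    # -> Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # Five connect/disconnect rounds; the exact connect flags are not traced.
  for i in {1..5}; do
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
          --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # prints "disconnected 1 controller(s)"
  done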
00:57:32.321 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:57:32.321 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72612 00:57:32.321 killing process with pid 72612 00:57:32.321 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:57:32.321 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:57:32.321 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72612' 00:57:32.321 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 72612 00:57:32.321 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 72612 00:57:32.579 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:57:32.579 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:57:32.579 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:57:32.579 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:57:32.579 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:57:32.579 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:57:32.579 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:57:32.579 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:57:32.579 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:57:32.579 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:57:32.579 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:57:32.579 05:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:57:32.579 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:57:32.579 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:57:32.579 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:57:32.579 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:57:32.579 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:57:32.579 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:57:32.579 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:57:32.579 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:57:32.579 05:56:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:57:32.579 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@246 -- # remove_spdk_ns 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@300 -- # return 0 00:57:32.839 ************************************ 00:57:32.839 END TEST nvmf_connect_disconnect 00:57:32.839 ************************************ 00:57:32.839 00:57:32.839 real 0m13.550s 00:57:32.839 user 0m48.980s 00:57:32.839 sys 0m1.908s 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:57:32.839 ************************************ 00:57:32.839 START TEST nvmf_multitarget 00:57:32.839 ************************************ 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:57:32.839 * Looking for test storage... 
00:57:32.839 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:57:32.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:32.839 --rc genhtml_branch_coverage=1 00:57:32.839 --rc genhtml_function_coverage=1 00:57:32.839 --rc genhtml_legend=1 00:57:32.839 --rc geninfo_all_blocks=1 00:57:32.839 --rc geninfo_unexecuted_blocks=1 00:57:32.839 00:57:32.839 ' 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:57:32.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:32.839 --rc genhtml_branch_coverage=1 00:57:32.839 --rc genhtml_function_coverage=1 00:57:32.839 --rc genhtml_legend=1 00:57:32.839 --rc geninfo_all_blocks=1 00:57:32.839 --rc geninfo_unexecuted_blocks=1 00:57:32.839 00:57:32.839 ' 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:57:32.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:32.839 --rc genhtml_branch_coverage=1 00:57:32.839 --rc genhtml_function_coverage=1 00:57:32.839 --rc genhtml_legend=1 00:57:32.839 --rc geninfo_all_blocks=1 00:57:32.839 --rc geninfo_unexecuted_blocks=1 00:57:32.839 00:57:32.839 ' 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:57:32.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:32.839 --rc genhtml_branch_coverage=1 00:57:32.839 --rc genhtml_function_coverage=1 00:57:32.839 --rc genhtml_legend=1 00:57:32.839 --rc geninfo_all_blocks=1 00:57:32.839 --rc geninfo_unexecuted_blocks=1 00:57:32.839 00:57:32.839 ' 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:57:32.839 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@7 -- # uname -s 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:57:33.099 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@15 -- # nvmftestinit 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@460 -- # nvmf_veth_init 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:57:33.099 Cannot find device "nvmf_init_br" 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:57:33.099 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:57:33.099 Cannot find device "nvmf_init_br2" 00:57:33.100 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:57:33.100 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:57:33.100 Cannot find device "nvmf_tgt_br" 00:57:33.100 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # true 00:57:33.100 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:57:33.100 Cannot find device "nvmf_tgt_br2" 00:57:33.100 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # true 00:57:33.100 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:57:33.100 Cannot find device "nvmf_init_br" 00:57:33.100 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # true 00:57:33.100 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:57:33.100 Cannot find device "nvmf_init_br2" 00:57:33.100 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # true 00:57:33.100 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:57:33.100 Cannot find device "nvmf_tgt_br" 00:57:33.100 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # true 00:57:33.100 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:57:33.100 Cannot find device "nvmf_tgt_br2" 00:57:33.100 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # true 00:57:33.100 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:57:33.100 Cannot find device "nvmf_br" 00:57:33.100 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # true 00:57:33.100 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:57:33.100 Cannot find device "nvmf_init_if" 00:57:33.100 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # true 00:57:33.100 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:57:33.100 Cannot find device "nvmf_init_if2" 00:57:33.100 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # true 00:57:33.100 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:57:33.100 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:57:33.100 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # true 00:57:33.100 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:57:33.100 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:57:33.100 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@174 -- # true 00:57:33.100 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:57:33.100 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:57:33.100 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:57:33.100 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:57:33.100 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:57:33.100 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:57:33.100 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:57:33.359 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:57:33.359 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:57:33.359 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:57:33.359 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:57:33.359 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:57:33.359 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:57:33.359 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:57:33.359 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:57:33.359 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:57:33.359 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:57:33.359 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:57:33.359 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:57:33.359 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:57:33.359 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:57:33.359 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:57:33.359 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:57:33.359 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:57:33.359 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:57:33.359 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:57:33.359 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:57:33.359 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:57:33.359 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:57:33.359 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:57:33.359 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:57:33.360 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:57:33.360 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:57:33.360 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:57:33.360 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:57:33.360 00:57:33.360 --- 10.0.0.3 ping statistics --- 00:57:33.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:33.360 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:57:33.360 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:57:33.360 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:57:33.360 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:57:33.360 00:57:33.360 --- 10.0.0.4 ping statistics --- 00:57:33.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:33.360 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:57:33.360 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:57:33.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:57:33.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:57:33.360 00:57:33.360 --- 10.0.0.1 ping statistics --- 00:57:33.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:33.360 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:57:33.360 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:57:33.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:57:33.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:57:33.360 00:57:33.360 --- 10.0.0.2 ping statistics --- 00:57:33.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:33.360 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:57:33.360 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:57:33.360 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@461 -- # return 0 00:57:33.360 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:57:33.360 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:57:33.360 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:57:33.360 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:57:33.360 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:57:33.360 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:57:33.360 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:57:33.360 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:57:33.360 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:57:33.360 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:57:33.360 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:57:33.360 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=73067 00:57:33.360 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:57:33.360 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 73067 00:57:33.360 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 73067 ']' 00:57:33.360 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:57:33.360 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:57:33.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:57:33.360 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:57:33.360 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:57:33.360 05:56:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:57:33.360 [2024-12-09 05:56:27.932868] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:57:33.360 [2024-12-09 05:56:27.932927] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:57:33.619 [2024-12-09 05:56:28.075292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:57:33.619 [2024-12-09 05:56:28.106410] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:57:33.619 [2024-12-09 05:56:28.106622] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:57:33.619 [2024-12-09 05:56:28.106797] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:57:33.619 [2024-12-09 05:56:28.106867] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:57:33.619 [2024-12-09 05:56:28.106955] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:57:33.619 [2024-12-09 05:56:28.107820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:57:33.619 [2024-12-09 05:56:28.107907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:57:33.619 [2024-12-09 05:56:28.108266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:57:33.619 [2024-12-09 05:56:28.108270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:57:33.877 05:56:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:57:33.877 05:56:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:57:33.877 05:56:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:57:33.877 05:56:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:57:33.877 05:56:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:57:33.878 05:56:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:57:33.878 05:56:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:57:33.878 05:56:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:57:33.878 05:56:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:57:33.878 05:56:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:57:33.878 05:56:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:57:34.136 "nvmf_tgt_1" 00:57:34.136 05:56:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:57:34.136 "nvmf_tgt_2" 00:57:34.136 05:56:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:57:34.136 05:56:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@28 -- # jq length 00:57:34.394 05:56:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:57:34.394 05:56:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:57:34.394 true 00:57:34.394 05:56:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:57:34.652 true 00:57:34.652 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:57:34.652 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:57:34.652 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:57:34.653 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:57:34.653 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:57:34.653 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:57:34.653 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:57:34.911 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:57:34.911 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:57:34.911 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:57:34.911 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:57:34.911 rmmod nvme_tcp 00:57:34.911 rmmod nvme_fabrics 00:57:34.911 rmmod nvme_keyring 00:57:34.911 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:57:34.911 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:57:34.911 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:57:34.911 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 73067 ']' 00:57:34.911 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 73067 00:57:34.911 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 73067 ']' 00:57:34.911 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 73067 00:57:34.911 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:57:34.911 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:57:34.911 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73067 00:57:34.911 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:57:34.911 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:57:34.911 killing process with pid 73067 00:57:34.911 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
73067' 00:57:34.911 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 73067 00:57:34.911 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 73067 00:57:35.168 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:57:35.168 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:57:35.168 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:57:35.168 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:57:35.168 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:57:35.168 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:57:35.168 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:57:35.168 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:57:35.168 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:57:35.168 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:57:35.168 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:57:35.168 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:57:35.168 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:57:35.168 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:57:35.168 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:57:35.168 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:57:35.168 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:57:35.168 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:57:35.168 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:57:35.168 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:57:35.168 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:57:35.168 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:57:35.168 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@246 -- # remove_spdk_ns 00:57:35.168 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:35.168 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:35.168 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@300 -- # return 0 00:57:35.427 00:57:35.427 
real 0m2.516s 00:57:35.427 user 0m6.938s 00:57:35.427 sys 0m0.705s 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:57:35.427 ************************************ 00:57:35.427 END TEST nvmf_multitarget 00:57:35.427 ************************************ 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:57:35.427 ************************************ 00:57:35.427 START TEST nvmf_rpc 00:57:35.427 ************************************ 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:57:35.427 * Looking for test storage... 00:57:35.427 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:57:35.427 05:56:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:57:35.427 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:57:35.427 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:57:35.427 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:57:35.427 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:57:35.427 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:57:35.427 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:57:35.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:35.427 --rc genhtml_branch_coverage=1 00:57:35.427 --rc genhtml_function_coverage=1 00:57:35.427 --rc genhtml_legend=1 00:57:35.427 --rc geninfo_all_blocks=1 00:57:35.427 --rc geninfo_unexecuted_blocks=1 00:57:35.427 00:57:35.427 ' 00:57:35.428 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:57:35.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:35.428 --rc genhtml_branch_coverage=1 00:57:35.428 --rc genhtml_function_coverage=1 00:57:35.428 --rc genhtml_legend=1 00:57:35.428 --rc geninfo_all_blocks=1 00:57:35.428 --rc geninfo_unexecuted_blocks=1 00:57:35.428 00:57:35.428 ' 00:57:35.428 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:57:35.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:35.428 --rc genhtml_branch_coverage=1 00:57:35.428 --rc genhtml_function_coverage=1 00:57:35.428 --rc genhtml_legend=1 00:57:35.428 --rc geninfo_all_blocks=1 00:57:35.428 --rc geninfo_unexecuted_blocks=1 00:57:35.428 00:57:35.428 ' 00:57:35.428 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:57:35.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:35.428 --rc genhtml_branch_coverage=1 00:57:35.428 --rc genhtml_function_coverage=1 00:57:35.428 --rc genhtml_legend=1 00:57:35.428 --rc geninfo_all_blocks=1 00:57:35.428 --rc geninfo_unexecuted_blocks=1 00:57:35.428 00:57:35.428 ' 00:57:35.428 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:57:35.428 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:57:35.428 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:57:35.428 05:56:30 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:57:35.428 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:57:35.428 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:57:35.428 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:57:35.428 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:57:35.428 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:57:35.428 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:57:35.428 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:57:35.688 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:57:35.688 Cannot find device "nvmf_init_br" 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:57:35.688 05:56:30 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:57:35.688 Cannot find device "nvmf_init_br2" 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:57:35.688 Cannot find device "nvmf_tgt_br" 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # true 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:57:35.688 Cannot find device "nvmf_tgt_br2" 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # true 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:57:35.688 Cannot find device "nvmf_init_br" 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # true 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:57:35.688 Cannot find device "nvmf_init_br2" 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # true 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:57:35.688 Cannot find device "nvmf_tgt_br" 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # true 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:57:35.688 Cannot find device "nvmf_tgt_br2" 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # true 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:57:35.688 Cannot find device "nvmf_br" 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # true 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:57:35.688 Cannot find device "nvmf_init_if" 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # true 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:57:35.688 Cannot find device "nvmf_init_if2" 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # true 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:57:35.688 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # true 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:57:35.688 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # true 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name 
nvmf_init_br2 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:57:35.688 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:57:35.947 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:57:35.947 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:57:35.947 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:57:35.947 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:57:35.947 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:57:35.947 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:57:35.947 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:57:35.948 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:57:35.948 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:57:35.948 00:57:35.948 --- 10.0.0.3 ping statistics --- 00:57:35.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:35.948 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:57:35.948 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:57:35.948 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:57:35.948 00:57:35.948 --- 10.0.0.4 ping statistics --- 00:57:35.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:35.948 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:57:35.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:57:35.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:57:35.948 00:57:35.948 --- 10.0.0.1 ping statistics --- 00:57:35.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:35.948 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:57:35.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:57:35.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:57:35.948 00:57:35.948 --- 10.0.0.2 ping statistics --- 00:57:35.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:35.948 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@461 -- # return 0 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=73335 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 73335 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 73335 ']' 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:57:35.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:57:35.948 05:56:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:35.948 [2024-12-09 05:56:30.520942] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
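The trace above is the harness building its NVMe/TCP test network: it creates the nvmf_tgt_ns_spdk namespace, four veth pairs (two initiator-side, two target-side), moves the target ends into the namespace, assigns 10.0.0.1/24 and 10.0.0.2/24 to the initiator interfaces and 10.0.0.3/24 and 10.0.0.4/24 to the target interfaces, joins all peer ends to the nvmf_br bridge, opens TCP port 4420 in iptables, and ping-checks every address before launching nvmf_tgt inside the namespace. A minimal sketch of the same topology, reduced to one initiator and one target interface, follows; it is illustrative only and not the exact nvmf_veth_init helper.

    # Sketch: one initiator veth pair and one target veth pair bridged together.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays in the root namespace
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target end will live in the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up     # bridge the two peer ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.3                                            # initiator -> target reachability check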
00:57:35.948 [2024-12-09 05:56:30.521058] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:57:36.207 [2024-12-09 05:56:30.674715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:57:36.207 [2024-12-09 05:56:30.713365] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:57:36.207 [2024-12-09 05:56:30.713418] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:57:36.207 [2024-12-09 05:56:30.713432] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:57:36.207 [2024-12-09 05:56:30.713442] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:57:36.207 [2024-12-09 05:56:30.713451] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:57:36.207 [2024-12-09 05:56:30.714359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:57:36.207 [2024-12-09 05:56:30.715100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:57:36.207 [2024-12-09 05:56:30.715299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:57:36.207 [2024-12-09 05:56:30.715307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:57:37.147 "poll_groups": [ 00:57:37.147 { 00:57:37.147 "admin_qpairs": 0, 00:57:37.147 "completed_nvme_io": 0, 00:57:37.147 "current_admin_qpairs": 0, 00:57:37.147 "current_io_qpairs": 0, 00:57:37.147 "io_qpairs": 0, 00:57:37.147 "name": "nvmf_tgt_poll_group_000", 00:57:37.147 "pending_bdev_io": 0, 00:57:37.147 "transports": [] 00:57:37.147 }, 00:57:37.147 { 00:57:37.147 "admin_qpairs": 0, 00:57:37.147 "completed_nvme_io": 0, 00:57:37.147 "current_admin_qpairs": 0, 00:57:37.147 "current_io_qpairs": 0, 00:57:37.147 "io_qpairs": 0, 00:57:37.147 "name": "nvmf_tgt_poll_group_001", 00:57:37.147 "pending_bdev_io": 0, 00:57:37.147 "transports": [] 00:57:37.147 }, 00:57:37.147 { 00:57:37.147 "admin_qpairs": 0, 00:57:37.147 "completed_nvme_io": 0, 00:57:37.147 "current_admin_qpairs": 0, 00:57:37.147 "current_io_qpairs": 0, 
00:57:37.147 "io_qpairs": 0, 00:57:37.147 "name": "nvmf_tgt_poll_group_002", 00:57:37.147 "pending_bdev_io": 0, 00:57:37.147 "transports": [] 00:57:37.147 }, 00:57:37.147 { 00:57:37.147 "admin_qpairs": 0, 00:57:37.147 "completed_nvme_io": 0, 00:57:37.147 "current_admin_qpairs": 0, 00:57:37.147 "current_io_qpairs": 0, 00:57:37.147 "io_qpairs": 0, 00:57:37.147 "name": "nvmf_tgt_poll_group_003", 00:57:37.147 "pending_bdev_io": 0, 00:57:37.147 "transports": [] 00:57:37.147 } 00:57:37.147 ], 00:57:37.147 "tick_rate": 2200000000 00:57:37.147 }' 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:37.147 [2024-12-09 05:56:31.645331] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:57:37.147 "poll_groups": [ 00:57:37.147 { 00:57:37.147 "admin_qpairs": 0, 00:57:37.147 "completed_nvme_io": 0, 00:57:37.147 "current_admin_qpairs": 0, 00:57:37.147 "current_io_qpairs": 0, 00:57:37.147 "io_qpairs": 0, 00:57:37.147 "name": "nvmf_tgt_poll_group_000", 00:57:37.147 "pending_bdev_io": 0, 00:57:37.147 "transports": [ 00:57:37.147 { 00:57:37.147 "trtype": "TCP" 00:57:37.147 } 00:57:37.147 ] 00:57:37.147 }, 00:57:37.147 { 00:57:37.147 "admin_qpairs": 0, 00:57:37.147 "completed_nvme_io": 0, 00:57:37.147 "current_admin_qpairs": 0, 00:57:37.147 "current_io_qpairs": 0, 00:57:37.147 "io_qpairs": 0, 00:57:37.147 "name": "nvmf_tgt_poll_group_001", 00:57:37.147 "pending_bdev_io": 0, 00:57:37.147 "transports": [ 00:57:37.147 { 00:57:37.147 "trtype": "TCP" 00:57:37.147 } 00:57:37.147 ] 00:57:37.147 }, 00:57:37.147 { 00:57:37.147 "admin_qpairs": 0, 00:57:37.147 "completed_nvme_io": 0, 00:57:37.147 "current_admin_qpairs": 0, 00:57:37.147 "current_io_qpairs": 0, 00:57:37.147 "io_qpairs": 0, 00:57:37.147 "name": "nvmf_tgt_poll_group_002", 00:57:37.147 "pending_bdev_io": 0, 00:57:37.147 "transports": [ 00:57:37.147 { 00:57:37.147 "trtype": "TCP" 00:57:37.147 } 
00:57:37.147 ] 00:57:37.147 }, 00:57:37.147 { 00:57:37.147 "admin_qpairs": 0, 00:57:37.147 "completed_nvme_io": 0, 00:57:37.147 "current_admin_qpairs": 0, 00:57:37.147 "current_io_qpairs": 0, 00:57:37.147 "io_qpairs": 0, 00:57:37.147 "name": "nvmf_tgt_poll_group_003", 00:57:37.147 "pending_bdev_io": 0, 00:57:37.147 "transports": [ 00:57:37.147 { 00:57:37.147 "trtype": "TCP" 00:57:37.147 } 00:57:37.147 ] 00:57:37.147 } 00:57:37.147 ], 00:57:37.147 "tick_rate": 2200000000 00:57:37.147 }' 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:57:37.147 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:37.406 Malloc1 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:57:37.406 05:56:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:37.406 [2024-12-09 05:56:31.844377] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -a 10.0.0.3 -s 4420 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -a 10.0.0.3 -s 4420 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -a 10.0.0.3 -s 4420 00:57:37.406 [2024-12-09 05:56:31.872715] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2' 00:57:37.406 Failed to write to /dev/nvme-fabrics: Input/output error 00:57:37.406 could not add new controller: failed to write to nvme-fabrics device 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 
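The failed connect above is deliberate: the subsystem was created with allow-any-host enabled (-a) and then switched off via nvmf_subsystem_allow_any_host -d, so nvmf_qpair_access_allowed rejects the host NQN until it is added explicitly, which the trace does next with nvmf_subsystem_add_host before reconnecting. A sketch of that authorization flow, assuming SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock and a listener on 10.0.0.3:4420 (the test drives the same calls through its rpc_cmd and NOT wrappers):

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1      # disable allow-any-host
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # Rejected: the host NQN is not yet on the subsystem's allowed-host list.
    nvme connect --hostnqn=$HOSTNQN -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 || true

    # Allow the host explicitly; the same connect now succeeds.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 $HOSTNQN
    nvme connect --hostnqn=$HOSTNQN -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420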
00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:37.406 05:56:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:57:37.664 05:56:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:57:37.664 05:56:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:57:37.664 05:56:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:57:37.664 05:56:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:57:37.664 05:56:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:57:39.563 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:57:39.563 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:57:39.563 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:57:39.563 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:57:39.563 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:57:39.563 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:57:39.563 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:57:39.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:57:39.563 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:57:39.563 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:57:39.563 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:57:39.563 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:57:39.563 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:57:39.563 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:57:39.563 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:57:39.563 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:57:39.563 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:39.563 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:39.821 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:39.821 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:57:39.821 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:57:39.821 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:57:39.821 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:57:39.821 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:57:39.821 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:57:39.821 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:57:39.822 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:57:39.822 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:57:39.822 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:57:39.822 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:57:39.822 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:57:39.822 [2024-12-09 05:56:34.173778] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2' 00:57:39.822 Failed to write to /dev/nvme-fabrics: Input/output error 00:57:39.822 could not add new controller: failed to write to nvme-fabrics device 00:57:39.822 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:57:39.822 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:57:39.822 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:57:39.822 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:57:39.822 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:57:39.822 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:39.822 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 
-- # set +x 00:57:39.822 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:39.822 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:57:39.822 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:57:39.822 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:57:39.822 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:57:39.822 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:57:39.822 05:56:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:57:42.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:42.352 [2024-12-09 05:56:36.578867] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:57:42.352 05:56:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:57:44.255 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:57:44.255 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:57:44.255 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:57:44.255 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:57:44.255 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:57:44.255 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:57:44.255 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:57:44.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:57:44.255 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:57:44.255 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:57:44.255 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:57:44.255 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:57:44.514 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:57:44.514 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:57:44.514 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:57:44.514 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:57:44.514 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:44.514 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:44.514 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:44.514 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:57:44.514 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:44.514 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:44.514 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:44.514 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:57:44.514 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:57:44.514 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:44.514 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:44.514 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:44.514 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:57:44.514 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:44.514 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:44.514 [2024-12-09 05:56:38.885798] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:57:44.514 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:44.514 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:57:44.514 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:44.514 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:44.514 05:56:38 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:44.514 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:57:44.514 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:44.514 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:44.514 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:44.514 05:56:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:57:44.514 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:57:44.514 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:57:44.514 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:57:44.514 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:57:44.514 05:56:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:57:47.047 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:57:47.047 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:57:47.047 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:57:47.047 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:57:47.047 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:57:47.047 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:57:47.047 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:57:47.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:57:47.047 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:57:47.047 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:57:47.047 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:47.048 05:56:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:47.048 [2024-12-09 05:56:41.293177] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:57:47.048 05:56:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1209 -- # sleep 2 00:57:48.982 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:57:48.982 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:57:48.982 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:57:48.982 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:57:48.982 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:57:48.982 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:57:48.982 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:57:48.982 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:57:48.982 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:57:48.982 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:57:48.982 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:57:48.982 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:57:48.982 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:57:48.982 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:57:49.239 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:57:49.239 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:57:49.239 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:49.239 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:49.239 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:49.239 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:57:49.239 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:49.239 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:49.239 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:49.239 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:57:49.239 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:57:49.239 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:49.239 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:49.239 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:49.239 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:57:49.239 05:56:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:49.239 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:49.239 [2024-12-09 05:56:43.600653] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:57:49.239 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:49.239 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:57:49.239 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:49.239 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:49.239 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:49.239 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:57:49.239 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:49.239 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:49.239 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:49.239 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:57:49.239 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:57:49.239 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:57:49.239 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:57:49.239 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:57:49.239 05:56:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:57:51.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:57:51.767 05:56:45 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:51.767 [2024-12-09 05:56:45.912024] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:57:51.767 05:56:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:57:51.767 05:56:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:57:51.767 05:56:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:57:51.767 05:56:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:57:51.767 05:56:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:57:51.767 05:56:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:57:53.671 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:57:53.671 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:57:53.671 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:57:53.671 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:57:53.671 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:57:53.671 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:57:53.671 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:57:53.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:57:53.671 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:57:53.671 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:57:53.671 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:57:53.671 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:57:53.671 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:57:53.671 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:57:53.671 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:57:53.671 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:57:53.671 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.671 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.671 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:53.671 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:57:53.671 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.671 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.671 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
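
The pass above is the create/attach/detach cycle from target/rpc.sh: each iteration creates a subsystem, adds a TCP listener and a namespace, connects the kernel initiator, polls lsblk until the SPDKISFASTANDAWESOME serial shows up, then disconnects and tears the subsystem down. A minimal sketch of the same sequence driven directly through scripts/rpc.py is shown here for orientation; the paths, the cnode1 NQN and the serial are taken from the log, while the loop bound and the single lsblk check (the real test polls it) are illustrative, and a running nvmf_tgt with a TCP transport and a Malloc1 bdev is assumed.

    #!/usr/bin/env bash
    # Sketch of the rpc.sh loop recorded above (assumes nvmf_tgt is already
    # running with a TCP transport created and a Malloc1 bdev available).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for i in $(seq 1 5); do
        $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420
        $rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
        $rpc nvmf_subsystem_allow_any_host "$nqn"
        nvme connect -t tcp -n "$nqn" -a 10.0.0.3 -s 4420          # host side
        lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME     # test polls this until the namespace appears
        nvme disconnect -n "$nqn"
        $rpc nvmf_subsystem_remove_ns "$nqn" 5
        $rpc nvmf_delete_subsystem "$nqn"
    done

The second pass that starts below (target/rpc.sh line 99 onward) repeats the same subsystem lifecycle without connecting a host, which is why no nvme connect or waitforserial steps appear in it before the final nvmf_get_stats check.
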
00:57:53.671 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:57:53.671 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:57:53.671 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:57:53.671 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.672 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.672 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:53.672 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:57:53.672 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.672 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.672 [2024-12-09 05:56:48.235176] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:57:53.672 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:53.672 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:57:53.672 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.672 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.672 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:53.672 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:57:53.672 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.672 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 
-- # xtrace_disable 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.931 [2024-12-09 05:56:48.283173] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:57:53.931 05:56:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.931 [2024-12-09 05:56:48.331256] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.931 [2024-12-09 05:56:48.379298] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:53.931 
05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.931 [2024-12-09 05:56:48.427324] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.931 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:57:53.932 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:57:53.932 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.932 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.932 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:53.932 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:57:53.932 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.932 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.932 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:53.932 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:57:53.932 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.932 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.932 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:53.932 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:57:53.932 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:57:53.932 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:53.932 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:57:53.932 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:57:53.932 "poll_groups": [ 00:57:53.932 { 00:57:53.932 "admin_qpairs": 2, 00:57:53.932 "completed_nvme_io": 66, 00:57:53.932 "current_admin_qpairs": 0, 00:57:53.932 "current_io_qpairs": 0, 00:57:53.932 "io_qpairs": 16, 00:57:53.932 "name": "nvmf_tgt_poll_group_000", 00:57:53.932 "pending_bdev_io": 0, 00:57:53.932 "transports": [ 00:57:53.932 { 00:57:53.932 "trtype": "TCP" 00:57:53.932 } 00:57:53.932 ] 00:57:53.932 }, 00:57:53.932 { 00:57:53.932 "admin_qpairs": 3, 00:57:53.932 "completed_nvme_io": 119, 00:57:53.932 "current_admin_qpairs": 0, 00:57:53.932 "current_io_qpairs": 0, 00:57:53.932 "io_qpairs": 17, 00:57:53.932 "name": "nvmf_tgt_poll_group_001", 00:57:53.932 "pending_bdev_io": 0, 00:57:53.932 "transports": [ 00:57:53.932 { 00:57:53.932 "trtype": "TCP" 00:57:53.932 } 00:57:53.932 ] 00:57:53.932 }, 00:57:53.932 { 00:57:53.932 "admin_qpairs": 1, 00:57:53.932 "completed_nvme_io": 166, 00:57:53.932 "current_admin_qpairs": 0, 00:57:53.932 "current_io_qpairs": 0, 00:57:53.932 "io_qpairs": 19, 00:57:53.932 "name": "nvmf_tgt_poll_group_002", 00:57:53.932 "pending_bdev_io": 0, 00:57:53.932 "transports": [ 00:57:53.932 { 00:57:53.932 "trtype": "TCP" 00:57:53.932 } 00:57:53.932 ] 00:57:53.932 }, 00:57:53.932 { 00:57:53.932 "admin_qpairs": 1, 00:57:53.932 "completed_nvme_io": 69, 00:57:53.932 "current_admin_qpairs": 0, 00:57:53.932 "current_io_qpairs": 0, 00:57:53.932 "io_qpairs": 18, 00:57:53.932 "name": "nvmf_tgt_poll_group_003", 00:57:53.932 "pending_bdev_io": 0, 00:57:53.932 "transports": [ 00:57:53.932 { 00:57:53.932 "trtype": "TCP" 00:57:53.932 } 00:57:53.932 ] 00:57:53.932 } 00:57:53.932 ], 
00:57:53.932 "tick_rate": 2200000000 00:57:53.932 }' 00:57:53.932 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:57:53.932 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:57:53.932 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:57:53.932 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:57:54.191 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:57:54.191 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:57:54.191 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:57:54.191 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:57:54.191 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:57:54.191 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:57:54.191 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:57:54.191 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:57:54.191 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:57:54.191 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:57:54.191 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:57:54.191 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:57:54.191 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:57:54.191 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:57:54.191 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:57:54.191 rmmod nvme_tcp 00:57:54.191 rmmod nvme_fabrics 00:57:54.191 rmmod nvme_keyring 00:57:54.191 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:57:54.191 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:57:54.191 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:57:54.191 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 73335 ']' 00:57:54.191 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 73335 00:57:54.191 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 73335 ']' 00:57:54.191 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 73335 00:57:54.191 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:57:54.191 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:57:54.191 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73335 00:57:54.191 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:57:54.191 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:57:54.191 killing process with pid 73335 00:57:54.191 05:56:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73335' 00:57:54.191 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 73335 00:57:54.191 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 73335 00:57:54.450 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:57:54.450 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:57:54.450 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:57:54.450 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:57:54.450 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:57:54.450 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:57:54.450 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:57:54.450 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:57:54.450 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:57:54.450 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:57:54.450 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:57:54.450 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:57:54.450 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:57:54.450 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:57:54.450 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:57:54.450 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:57:54.450 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:57:54.450 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:57:54.450 05:56:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:57:54.450 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:57:54.710 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:57:54.710 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:57:54.711 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:57:54.711 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:54.711 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:54.711 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:54.711 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@300 -- # return 0 00:57:54.711 00:57:54.711 real 0m19.296s 00:57:54.711 user 1m11.570s 00:57:54.711 sys 0m2.652s 00:57:54.711 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:57:54.711 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:57:54.711 ************************************ 00:57:54.711 END TEST nvmf_rpc 00:57:54.711 ************************************ 00:57:54.711 05:56:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:57:54.711 05:56:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:57:54.711 05:56:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:57:54.711 05:56:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:57:54.711 ************************************ 00:57:54.711 START TEST nvmf_invalid 00:57:54.711 ************************************ 00:57:54.711 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:57:54.711 * Looking for test storage... 00:57:54.711 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:57:54.711 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:57:54.711 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:57:54.711 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:57:54.971 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:57:54.971 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:57:54.971 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:57:54.971 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:57:54.971 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:57:54.971 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:57:54.971 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:57:54.971 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:57:54.971 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:57:54.971 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:57:54.971 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:57:54.971 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:57:54.971 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:57:54.971 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:57:54.971 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:57:54.971 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:57:54.971 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:57:54.971 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:57:54.971 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:57:54.971 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:57:54.971 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:57:54.971 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:57:54.971 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:57:54.971 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:57:54.971 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:57:54.971 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:57:54.971 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:57:54.971 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:57:54.971 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:57:54.971 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:57:54.971 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:57:54.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:54.971 --rc genhtml_branch_coverage=1 00:57:54.972 --rc genhtml_function_coverage=1 00:57:54.972 --rc genhtml_legend=1 00:57:54.972 --rc geninfo_all_blocks=1 00:57:54.972 --rc geninfo_unexecuted_blocks=1 00:57:54.972 00:57:54.972 ' 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:57:54.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:54.972 --rc genhtml_branch_coverage=1 00:57:54.972 --rc genhtml_function_coverage=1 00:57:54.972 --rc genhtml_legend=1 00:57:54.972 --rc geninfo_all_blocks=1 00:57:54.972 --rc geninfo_unexecuted_blocks=1 00:57:54.972 00:57:54.972 ' 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:57:54.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:54.972 --rc genhtml_branch_coverage=1 00:57:54.972 --rc genhtml_function_coverage=1 00:57:54.972 --rc genhtml_legend=1 00:57:54.972 --rc geninfo_all_blocks=1 00:57:54.972 --rc geninfo_unexecuted_blocks=1 00:57:54.972 00:57:54.972 ' 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:57:54.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:57:54.972 --rc genhtml_branch_coverage=1 00:57:54.972 --rc genhtml_function_coverage=1 00:57:54.972 --rc genhtml_legend=1 00:57:54.972 --rc geninfo_all_blocks=1 00:57:54.972 --rc geninfo_unexecuted_blocks=1 00:57:54.972 00:57:54.972 ' 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:57:54.972 05:56:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:57:54.972 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # 
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
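
The nvmf_veth_init helper entered here builds a self-contained test network out of the variables it sets in these entries: veth pairs, a bridge (nvmf_br), and a network namespace (nvmf_tgt_ns_spdk) holding the target-side interfaces, so the target listens on 10.0.0.3/10.0.0.4 while the initiator stays in the root namespace on 10.0.0.1/10.0.0.2. A condensed sketch of that topology follows, using the interface names and addresses from the log; it is a subset of what the ip commands recorded below actually do (the full helper in test/nvmf/common.sh also creates the second initiator/target pair and the iptables ACCEPT rules).

    # Namespace for the target, plus one initiator-side and one target-side veth pair.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Addresses: initiator in the root namespace, target inside the namespace.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    # Bring the links up on both sides.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # One bridge ties the *_br ends together so 10.0.0.1 can reach 10.0.0.3.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

The target application is then launched inside the namespace via ip netns exec, which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD further down in the log before nvmf_tgt is started.
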
00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:57:54.972 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:57:54.973 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:57:54.973 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:57:54.973 Cannot find device "nvmf_init_br" 00:57:54.973 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:57:54.973 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:57:54.973 Cannot find device "nvmf_init_br2" 00:57:54.973 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:57:54.973 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:57:54.973 Cannot find device "nvmf_tgt_br" 00:57:54.973 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # true 00:57:54.973 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:57:54.973 Cannot find device "nvmf_tgt_br2" 00:57:54.973 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # true 00:57:54.973 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:57:54.973 Cannot find device "nvmf_init_br" 00:57:54.973 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # true 00:57:54.973 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:57:54.973 Cannot find device "nvmf_init_br2" 00:57:54.973 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # true 00:57:54.973 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:57:54.973 Cannot find device "nvmf_tgt_br" 00:57:54.973 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # true 00:57:54.973 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:57:54.973 Cannot find device "nvmf_tgt_br2" 00:57:54.973 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # true 00:57:54.973 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:57:54.973 Cannot find device "nvmf_br" 00:57:54.973 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # true 00:57:54.973 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:57:54.973 Cannot find device "nvmf_init_if" 00:57:54.973 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # true 00:57:54.973 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:57:54.973 Cannot find device "nvmf_init_if2" 00:57:54.973 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # true 00:57:54.973 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:57:54.973 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:57:54.973 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # true 00:57:54.973 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:57:54.973 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:57:54.973 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # true 00:57:54.973 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:57:54.973 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:57:54.973 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:57:54.973 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:57:55.244 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:57:55.245 05:56:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:57:55.245 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:57:55.245 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:57:55.245 00:57:55.245 --- 10.0.0.3 ping statistics --- 00:57:55.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:55.245 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:57:55.245 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:57:55.245 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:57:55.245 00:57:55.245 --- 10.0.0.4 ping statistics --- 00:57:55.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:55.245 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:57:55.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:57:55.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:57:55.245 00:57:55.245 --- 10.0.0.1 ping statistics --- 00:57:55.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:55.245 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:57:55.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:57:55.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:57:55.245 00:57:55.245 --- 10.0.0.2 ping statistics --- 00:57:55.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:55.245 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@461 -- # return 0 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=73903 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 73903 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 73903 ']' 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:57:55.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:57:55.245 05:56:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:57:55.245 [2024-12-09 05:56:49.823464] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:57:55.245 [2024-12-09 05:56:49.823878] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:57:55.504 [2024-12-09 05:56:49.966158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:57:55.504 [2024-12-09 05:56:49.994718] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:57:55.504 [2024-12-09 05:56:49.994775] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:57:55.504 [2024-12-09 05:56:49.994802] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:57:55.504 [2024-12-09 05:56:49.994809] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:57:55.504 [2024-12-09 05:56:49.994815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:57:55.504 [2024-12-09 05:56:49.995548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:57:55.504 [2024-12-09 05:56:49.996279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:57:55.504 [2024-12-09 05:56:49.996436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:57:55.504 [2024-12-09 05:56:49.996443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:57:55.763 05:56:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:57:55.763 05:56:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:57:55.763 05:56:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:57:55.763 05:56:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:57:55.763 05:56:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:57:55.763 05:56:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:57:55.763 05:56:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:57:55.763 05:56:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode25126 00:57:56.021 [2024-12-09 05:56:50.431433] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:57:56.021 05:56:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/12/09 05:56:50 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode25126 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:57:56.021 request: 00:57:56.021 { 00:57:56.021 "method": "nvmf_create_subsystem", 00:57:56.021 "params": { 00:57:56.021 "nqn": "nqn.2016-06.io.spdk:cnode25126", 00:57:56.021 "tgt_name": "foobar" 00:57:56.021 } 00:57:56.021 } 00:57:56.021 Got JSON-RPC error response 00:57:56.021 GoRPCClient: error on JSON-RPC call' 00:57:56.021 05:56:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/12/09 05:56:50 error on JSON-RPC call, method: nvmf_create_subsystem, 
params: map[nqn:nqn.2016-06.io.spdk:cnode25126 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:57:56.021 request: 00:57:56.021 { 00:57:56.021 "method": "nvmf_create_subsystem", 00:57:56.021 "params": { 00:57:56.021 "nqn": "nqn.2016-06.io.spdk:cnode25126", 00:57:56.021 "tgt_name": "foobar" 00:57:56.021 } 00:57:56.021 } 00:57:56.021 Got JSON-RPC error response 00:57:56.021 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:57:56.021 05:56:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:57:56.021 05:56:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode17490 00:57:56.280 [2024-12-09 05:56:50.739754] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17490: invalid serial number 'SPDKISFASTANDAWESOME' 00:57:56.280 05:56:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/12/09 05:56:50 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode17490 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:57:56.280 request: 00:57:56.280 { 00:57:56.280 "method": "nvmf_create_subsystem", 00:57:56.280 "params": { 00:57:56.280 "nqn": "nqn.2016-06.io.spdk:cnode17490", 00:57:56.280 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:57:56.280 } 00:57:56.280 } 00:57:56.280 Got JSON-RPC error response 00:57:56.280 GoRPCClient: error on JSON-RPC call' 00:57:56.280 05:56:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/12/09 05:56:50 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode17490 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:57:56.280 request: 00:57:56.280 { 00:57:56.280 "method": "nvmf_create_subsystem", 00:57:56.280 "params": { 00:57:56.280 "nqn": "nqn.2016-06.io.spdk:cnode17490", 00:57:56.280 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:57:56.280 } 00:57:56.280 } 00:57:56.281 Got JSON-RPC error response 00:57:56.281 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:57:56.281 05:56:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:57:56.281 05:56:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode2181 00:57:56.540 [2024-12-09 05:56:51.027919] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2181: invalid model number 'SPDK_Controller' 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/12/09 05:56:51 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode2181], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:57:56.540 request: 00:57:56.540 { 00:57:56.540 "method": "nvmf_create_subsystem", 00:57:56.540 "params": { 00:57:56.540 "nqn": "nqn.2016-06.io.spdk:cnode2181", 00:57:56.540 "model_number": "SPDK_Controller\u001f" 00:57:56.540 
} 00:57:56.540 } 00:57:56.540 Got JSON-RPC error response 00:57:56.540 GoRPCClient: error on JSON-RPC call' 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/12/09 05:56:51 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode2181], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:57:56.540 request: 00:57:56.540 { 00:57:56.540 "method": "nvmf_create_subsystem", 00:57:56.540 "params": { 00:57:56.540 "nqn": "nqn.2016-06.io.spdk:cnode2181", 00:57:56.540 "model_number": "SPDK_Controller\u001f" 00:57:56.540 } 00:57:56.540 } 00:57:56.540 Got JSON-RPC error response 00:57:56.540 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:56.540 05:56:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:57:56.540 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:57:56.541 05:56:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:57:56.541 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:57:56.800 
05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ I == \- ]] 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'I~'\''VF%:Eh-uy]@6Q i8Ur' 00:57:56.800 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'I~'\''VF%:Eh-uy]@6Q i8Ur' nqn.2016-06.io.spdk:cnode987 00:57:57.059 [2024-12-09 05:56:51.460272] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode987: invalid serial number 'I~'VF%:Eh-uy]@6Q i8Ur' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/12/09 05:56:51 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode987 serial_number:I~'\''VF%:Eh-uy]@6Q i8Ur], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN I~'\''VF%:Eh-uy]@6Q i8Ur 00:57:57.059 request: 00:57:57.059 { 00:57:57.059 "method": "nvmf_create_subsystem", 00:57:57.059 "params": { 00:57:57.059 "nqn": "nqn.2016-06.io.spdk:cnode987", 
00:57:57.059 "serial_number": "I~'\''VF%:Eh-uy]@6Q i8Ur" 00:57:57.059 } 00:57:57.059 } 00:57:57.059 Got JSON-RPC error response 00:57:57.059 GoRPCClient: error on JSON-RPC call' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/12/09 05:56:51 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode987 serial_number:I~'VF%:Eh-uy]@6Q i8Ur], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN I~'VF%:Eh-uy]@6Q i8Ur 00:57:57.059 request: 00:57:57.059 { 00:57:57.059 "method": "nvmf_create_subsystem", 00:57:57.059 "params": { 00:57:57.059 "nqn": "nqn.2016-06.io.spdk:cnode987", 00:57:57.059 "serial_number": "I~'VF%:Eh-uy]@6Q i8Ur" 00:57:57.059 } 00:57:57.059 } 00:57:57.059 Got JSON-RPC error response 00:57:57.059 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:57:57.059 05:56:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:57:57.059 
05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:57:57.059 
05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 
05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 
05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.059 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 
00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:57:57.318 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:57:57.319 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ O == \- ]] 00:57:57.319 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'O} 9A9f-ow3^N`H0_g/FK\\II62s;&,L;+@ DYYa>' 00:57:57.319 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'O} 9A9f-ow3^N`H0_g/FK\\II62s;&,L;+@ DYYa>' nqn.2016-06.io.spdk:cnode30535 00:57:57.577 [2024-12-09 05:56:51.972775] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30535: invalid model number 'O} 9A9f-ow3^N`H0_g/FK\\II62s;&,L;+@ DYYa>' 00:57:57.577 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/12/09 05:56:51 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:O} 9A9f-ow3^N`H0_g/FK\\II62s;&,L;+@ DYYa> nqn:nqn.2016-06.io.spdk:cnode30535], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN O} 9A9f-ow3^N`H0_g/FK\\II62s;&,L;+@ DYYa> 00:57:57.577 request: 00:57:57.577 { 00:57:57.577 "method": "nvmf_create_subsystem", 00:57:57.577 "params": { 00:57:57.577 "nqn": "nqn.2016-06.io.spdk:cnode30535", 
00:57:57.577 "model_number": "O} 9A9f-ow3^N`H0_g/FK\\\\II62s;&,L;+@ DYYa>" 00:57:57.577 } 00:57:57.577 } 00:57:57.577 Got JSON-RPC error response 00:57:57.577 GoRPCClient: error on JSON-RPC call' 00:57:57.577 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/12/09 05:56:51 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:O} 9A9f-ow3^N`H0_g/FK\\II62s;&,L;+@ DYYa> nqn:nqn.2016-06.io.spdk:cnode30535], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN O} 9A9f-ow3^N`H0_g/FK\\II62s;&,L;+@ DYYa> 00:57:57.577 request: 00:57:57.577 { 00:57:57.577 "method": "nvmf_create_subsystem", 00:57:57.577 "params": { 00:57:57.577 "nqn": "nqn.2016-06.io.spdk:cnode30535", 00:57:57.577 "model_number": "O} 9A9f-ow3^N`H0_g/FK\\\\II62s;&,L;+@ DYYa>" 00:57:57.577 } 00:57:57.577 } 00:57:57.577 Got JSON-RPC error response 00:57:57.577 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:57:57.577 05:56:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:57:57.836 [2024-12-09 05:56:52.285093] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:57:57.836 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:57:58.094 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:57:58.094 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:57:58.094 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:57:58.094 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:57:58.094 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:57:58.354 [2024-12-09 05:56:52.905613] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:57:58.354 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/12/09 05:56:52 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:57:58.354 request: 00:57:58.354 { 00:57:58.354 "method": "nvmf_subsystem_remove_listener", 00:57:58.354 "params": { 00:57:58.354 "nqn": "nqn.2016-06.io.spdk:cnode", 00:57:58.354 "listen_address": { 00:57:58.354 "trtype": "tcp", 00:57:58.354 "traddr": "", 00:57:58.354 "trsvcid": "4421" 00:57:58.354 } 00:57:58.354 } 00:57:58.354 } 00:57:58.354 Got JSON-RPC error response 00:57:58.354 GoRPCClient: error on JSON-RPC call' 00:57:58.354 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/12/09 05:56:52 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:57:58.354 request: 00:57:58.354 { 00:57:58.354 "method": "nvmf_subsystem_remove_listener", 00:57:58.354 "params": { 00:57:58.354 "nqn": "nqn.2016-06.io.spdk:cnode", 
00:57:58.354 "listen_address": { 00:57:58.354 "trtype": "tcp", 00:57:58.354 "traddr": "", 00:57:58.354 "trsvcid": "4421" 00:57:58.354 } 00:57:58.354 } 00:57:58.354 } 00:57:58.354 Got JSON-RPC error response 00:57:58.354 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:57:58.354 05:56:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30601 -i 0 00:57:58.621 [2024-12-09 05:56:53.197894] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30601: invalid cntlid range [0-65519] 00:57:58.892 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/12/09 05:56:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode30601], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:57:58.892 request: 00:57:58.892 { 00:57:58.892 "method": "nvmf_create_subsystem", 00:57:58.892 "params": { 00:57:58.892 "nqn": "nqn.2016-06.io.spdk:cnode30601", 00:57:58.892 "min_cntlid": 0 00:57:58.892 } 00:57:58.892 } 00:57:58.892 Got JSON-RPC error response 00:57:58.892 GoRPCClient: error on JSON-RPC call' 00:57:58.892 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/12/09 05:56:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode30601], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:57:58.892 request: 00:57:58.892 { 00:57:58.892 "method": "nvmf_create_subsystem", 00:57:58.892 "params": { 00:57:58.892 "nqn": "nqn.2016-06.io.spdk:cnode30601", 00:57:58.892 "min_cntlid": 0 00:57:58.892 } 00:57:58.892 } 00:57:58.892 Got JSON-RPC error response 00:57:58.892 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:57:58.892 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8084 -i 65520 00:57:58.892 [2024-12-09 05:56:53.438132] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8084: invalid cntlid range [65520-65519] 00:57:58.892 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/12/09 05:56:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode8084], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:57:58.892 request: 00:57:58.892 { 00:57:58.892 "method": "nvmf_create_subsystem", 00:57:58.892 "params": { 00:57:58.892 "nqn": "nqn.2016-06.io.spdk:cnode8084", 00:57:58.892 "min_cntlid": 65520 00:57:58.892 } 00:57:58.892 } 00:57:58.892 Got JSON-RPC error response 00:57:58.892 GoRPCClient: error on JSON-RPC call' 00:57:58.892 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/12/09 05:56:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode8084], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:57:58.892 request: 00:57:58.892 { 00:57:58.892 "method": "nvmf_create_subsystem", 00:57:58.892 "params": { 00:57:58.892 "nqn": 
"nqn.2016-06.io.spdk:cnode8084", 00:57:58.892 "min_cntlid": 65520 00:57:58.892 } 00:57:58.892 } 00:57:58.892 Got JSON-RPC error response 00:57:58.892 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:57:58.892 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2180 -I 0 00:57:59.165 [2024-12-09 05:56:53.724894] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2180: invalid cntlid range [1-0] 00:57:59.423 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/12/09 05:56:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode2180], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:57:59.423 request: 00:57:59.423 { 00:57:59.423 "method": "nvmf_create_subsystem", 00:57:59.423 "params": { 00:57:59.423 "nqn": "nqn.2016-06.io.spdk:cnode2180", 00:57:59.423 "max_cntlid": 0 00:57:59.423 } 00:57:59.423 } 00:57:59.423 Got JSON-RPC error response 00:57:59.423 GoRPCClient: error on JSON-RPC call' 00:57:59.423 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/12/09 05:56:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode2180], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:57:59.423 request: 00:57:59.423 { 00:57:59.423 "method": "nvmf_create_subsystem", 00:57:59.423 "params": { 00:57:59.423 "nqn": "nqn.2016-06.io.spdk:cnode2180", 00:57:59.423 "max_cntlid": 0 00:57:59.423 } 00:57:59.423 } 00:57:59.423 Got JSON-RPC error response 00:57:59.423 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:57:59.423 05:56:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30007 -I 65520 00:57:59.681 [2024-12-09 05:56:54.045193] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30007: invalid cntlid range [1-65520] 00:57:59.681 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/12/09 05:56:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode30007], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:57:59.681 request: 00:57:59.681 { 00:57:59.681 "method": "nvmf_create_subsystem", 00:57:59.681 "params": { 00:57:59.681 "nqn": "nqn.2016-06.io.spdk:cnode30007", 00:57:59.681 "max_cntlid": 65520 00:57:59.681 } 00:57:59.681 } 00:57:59.681 Got JSON-RPC error response 00:57:59.681 GoRPCClient: error on JSON-RPC call' 00:57:59.681 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/12/09 05:56:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode30007], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:57:59.681 request: 00:57:59.681 { 00:57:59.681 "method": "nvmf_create_subsystem", 00:57:59.681 "params": { 00:57:59.681 "nqn": "nqn.2016-06.io.spdk:cnode30007", 00:57:59.681 "max_cntlid": 65520 00:57:59.681 } 00:57:59.681 } 00:57:59.681 Got JSON-RPC error 
response 00:57:59.681 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:57:59.681 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2484 -i 6 -I 5 00:57:59.937 [2024-12-09 05:56:54.305385] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2484: invalid cntlid range [6-5] 00:57:59.937 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/12/09 05:56:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode2484], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:57:59.937 request: 00:57:59.937 { 00:57:59.937 "method": "nvmf_create_subsystem", 00:57:59.937 "params": { 00:57:59.937 "nqn": "nqn.2016-06.io.spdk:cnode2484", 00:57:59.937 "min_cntlid": 6, 00:57:59.937 "max_cntlid": 5 00:57:59.937 } 00:57:59.937 } 00:57:59.937 Got JSON-RPC error response 00:57:59.937 GoRPCClient: error on JSON-RPC call' 00:57:59.937 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/12/09 05:56:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode2484], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:57:59.937 request: 00:57:59.937 { 00:57:59.937 "method": "nvmf_create_subsystem", 00:57:59.937 "params": { 00:57:59.937 "nqn": "nqn.2016-06.io.spdk:cnode2484", 00:57:59.937 "min_cntlid": 6, 00:57:59.937 "max_cntlid": 5 00:57:59.937 } 00:57:59.937 } 00:57:59.937 Got JSON-RPC error response 00:57:59.937 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:57:59.937 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:57:59.937 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:57:59.937 { 00:57:59.938 "name": "foobar", 00:57:59.938 "method": "nvmf_delete_target", 00:57:59.938 "req_id": 1 00:57:59.938 } 00:57:59.938 Got JSON-RPC error response 00:57:59.938 response: 00:57:59.938 { 00:57:59.938 "code": -32602, 00:57:59.938 "message": "The specified target doesn'\''t exist, cannot delete it." 00:57:59.938 }' 00:57:59.938 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:57:59.938 { 00:57:59.938 "name": "foobar", 00:57:59.938 "method": "nvmf_delete_target", 00:57:59.938 "req_id": 1 00:57:59.938 } 00:57:59.938 Got JSON-RPC error response 00:57:59.938 response: 00:57:59.938 { 00:57:59.938 "code": -32602, 00:57:59.938 "message": "The specified target doesn't exist, cannot delete it." 
00:57:59.938 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:57:59.938 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:57:59.938 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:57:59.938 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:57:59.938 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:57:59.938 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:57:59.938 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:57:59.938 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:57:59.938 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:57:59.938 rmmod nvme_tcp 00:57:59.938 rmmod nvme_fabrics 00:57:59.938 rmmod nvme_keyring 00:57:59.938 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:57:59.938 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:57:59.938 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:57:59.938 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 73903 ']' 00:57:59.938 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 73903 00:57:59.938 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 73903 ']' 00:57:59.938 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 73903 00:57:59.938 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:58:00.195 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:58:00.195 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73903 00:58:00.195 killing process with pid 73903 00:58:00.195 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:58:00.195 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:58:00.195 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73903' 00:58:00.195 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 73903 00:58:00.195 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 73903 00:58:00.195 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:58:00.195 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:58:00.195 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:58:00.195 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:58:00.195 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:58:00.195 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:58:00.195 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # 
iptables-restore 00:58:00.195 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:58:00.195 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:58:00.195 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:58:00.195 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:58:00.195 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:58:00.195 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:58:00.195 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:58:00.195 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:58:00.195 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:58:00.195 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:58:00.195 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:58:00.454 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:58:00.454 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:58:00.454 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:58:00.454 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:58:00.454 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:58:00.454 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:58:00.454 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:58:00.454 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:58:00.454 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@300 -- # return 0 00:58:00.454 ************************************ 00:58:00.454 END TEST nvmf_invalid 00:58:00.454 ************************************ 00:58:00.454 00:58:00.454 real 0m5.744s 00:58:00.454 user 0m22.394s 00:58:00.454 sys 0m1.301s 00:58:00.454 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:58:00.454 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:58:00.454 05:56:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:58:00.454 05:56:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:58:00.454 05:56:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:58:00.454 05:56:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:58:00.454 ************************************ 00:58:00.454 START TEST nvmf_connect_stress 00:58:00.454 
************************************ 00:58:00.454 05:56:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:58:00.454 * Looking for test storage... 00:58:00.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:58:00.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:00.719 --rc genhtml_branch_coverage=1 00:58:00.719 --rc genhtml_function_coverage=1 00:58:00.719 --rc genhtml_legend=1 00:58:00.719 --rc geninfo_all_blocks=1 00:58:00.719 --rc geninfo_unexecuted_blocks=1 00:58:00.719 00:58:00.719 ' 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:58:00.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:00.719 --rc genhtml_branch_coverage=1 00:58:00.719 --rc genhtml_function_coverage=1 00:58:00.719 --rc genhtml_legend=1 00:58:00.719 --rc geninfo_all_blocks=1 00:58:00.719 --rc geninfo_unexecuted_blocks=1 00:58:00.719 00:58:00.719 ' 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:58:00.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:00.719 --rc genhtml_branch_coverage=1 00:58:00.719 --rc genhtml_function_coverage=1 00:58:00.719 --rc genhtml_legend=1 00:58:00.719 --rc geninfo_all_blocks=1 00:58:00.719 --rc geninfo_unexecuted_blocks=1 00:58:00.719 00:58:00.719 ' 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:58:00.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:00.719 --rc genhtml_branch_coverage=1 00:58:00.719 --rc genhtml_function_coverage=1 00:58:00.719 --rc genhtml_legend=1 00:58:00.719 --rc geninfo_all_blocks=1 00:58:00.719 --rc geninfo_unexecuted_blocks=1 00:58:00.719 00:58:00.719 ' 00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
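(Not part of the captured console output — a brief aside on the trace just above: the lt 1.15 2 / cmp_versions calls walk a component-wise version comparison used to decide whether the installed lcov is older than 2.x. A minimal stand-alone sketch of that idea, with an assumed helper name rather than SPDK's actual scripts/common.sh code, might look roughly like this:)

    # Sketch only: split two version strings on '.', '-' or ':' and compare component-wise.
    ver_lt() {                      # returns 0 (true) when $1 < $2
        local -a v1 v2
        local i c1 c2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
            c1=${v1[i]:-0}; c2=${v2[i]:-0}
            [[ $c1 =~ ^[0-9]+$ ]] || c1=0   # non-numeric components compare as 0 here
            [[ $c2 =~ ^[0-9]+$ ]] || c2=0
            ((c1 < c2)) && return 0
            ((c1 > c2)) && return 1
        done
        return 1                    # equal versions are not "less than"
    }
    # e.g. ver_lt 1.15 2 succeeds, matching the lcov check traced above.

(End of aside; the log resumes below with test/nvmf/common.sh being sourced.)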
00:58:00.719 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:58:00.720 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:58:00.720 05:56:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:58:00.720 05:56:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:58:00.720 Cannot find device "nvmf_init_br" 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:58:00.720 Cannot find device "nvmf_init_br2" 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:58:00.720 Cannot find device "nvmf_tgt_br" 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # true 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:58:00.720 Cannot find device "nvmf_tgt_br2" 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # true 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:58:00.720 Cannot find device "nvmf_init_br" 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # true 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:58:00.720 Cannot find device "nvmf_init_br2" 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # true 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:58:00.720 Cannot find device "nvmf_tgt_br" 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # true 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:58:00.720 Cannot find device "nvmf_tgt_br2" 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # true 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:58:00.720 Cannot find device "nvmf_br" 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # true 00:58:00.720 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:58:00.720 Cannot find device "nvmf_init_if" 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # true 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:58:00.983 Cannot find device "nvmf_init_if2" 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # true 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:58:00.983 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:58:00.983 05:56:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # true 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:58:00.983 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # true 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:58:00.983 05:56:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:58:00.983 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:58:00.983 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.129 ms 00:58:00.983 00:58:00.983 --- 10.0.0.3 ping statistics --- 00:58:00.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:00.983 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:58:00.983 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:58:00.983 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:58:00.983 00:58:00.983 --- 10.0.0.4 ping statistics --- 00:58:00.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:00.983 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:58:00.983 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:58:01.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:58:01.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:58:01.241 00:58:01.241 --- 10.0.0.1 ping statistics --- 00:58:01.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:01.241 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:58:01.241 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:58:01.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:58:01.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:58:01.241 00:58:01.241 --- 10.0.0.2 ping statistics --- 00:58:01.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:01.241 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:58:01.241 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:58:01.241 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@461 -- # return 0 00:58:01.241 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:58:01.241 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:58:01.241 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:58:01.241 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:58:01.241 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:58:01.241 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:58:01.241 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:58:01.241 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:58:01.241 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:58:01.241 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:58:01.241 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:01.241 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=74443 00:58:01.241 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:58:01.241 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 74443 00:58:01.241 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 74443 ']' 00:58:01.241 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:58:01.241 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:58:01.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:58:01.241 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:58:01.241 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:58:01.241 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:01.241 [2024-12-09 05:56:55.668135] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:58:01.241 [2024-12-09 05:56:55.668224] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:58:01.241 [2024-12-09 05:56:55.822381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:58:01.501 [2024-12-09 05:56:55.862621] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:58:01.501 [2024-12-09 05:56:55.862693] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:58:01.501 [2024-12-09 05:56:55.862709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:58:01.501 [2024-12-09 05:56:55.862719] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:58:01.501 [2024-12-09 05:56:55.862727] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:58:01.501 [2024-12-09 05:56:55.863685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:58:01.501 [2024-12-09 05:56:55.863797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:58:01.501 [2024-12-09 05:56:55.863806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:58:01.501 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:58:01.501 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:58:01.501 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:58:01.501 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:58:01.501 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:01.501 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:58:01.501 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:58:01.501 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:01.501 05:56:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:01.501 [2024-12-09 05:56:56.004012] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:58:01.501 05:56:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:01.501 [2024-12-09 05:56:56.022164] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:01.501 NULL1 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=74487 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:58:01.501 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:58:01.759 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:58:01.759 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:58:01.759 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:58:01.759 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:58:01.759 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:58:01.759 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:58:01.759 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:58:01.759 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:58:01.759 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:58:01.759 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:58:01.759 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:58:01.759 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:58:01.759 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:58:01.759 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:58:01.759 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:58:01.759 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:58:01.759 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:58:01.759 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:58:01.759 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:01.759 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:01.760 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:01.760 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:02.017 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:58:02.017 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:02.017 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:02.017 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:02.017 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:02.275 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:02.275 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:02.275 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:02.275 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:02.276 05:56:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:02.534 05:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:02.534 05:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:02.534 05:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:02.534 05:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:02.534 05:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:03.102 05:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:03.102 05:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:03.102 05:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:03.102 05:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:03.102 05:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:03.361 05:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:03.361 05:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:03.361 05:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:03.361 05:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:03.361 05:56:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:03.619 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:03.619 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:03.619 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:03.619 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:03.619 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:03.877 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:03.878 
05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:03.878 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:03.878 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:03.878 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:04.136 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:04.136 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:04.136 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:04.136 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:04.136 05:56:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:04.703 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:04.703 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:04.703 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:04.703 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:04.703 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:04.961 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:04.961 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:04.961 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:04.961 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:04.961 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:05.220 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:05.220 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:05.220 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:05.220 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:05.220 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:05.497 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:05.497 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:05.497 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:05.497 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:05.497 05:56:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:05.755 05:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:05.755 05:57:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:05.755 05:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:05.756 05:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:05.756 05:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:06.323 05:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:06.323 05:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:06.323 05:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:06.323 05:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:06.323 05:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:06.581 05:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:06.581 05:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:06.581 05:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:06.581 05:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:06.581 05:57:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:06.840 05:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:06.840 05:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:06.840 05:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:06.840 05:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:06.840 05:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:07.098 05:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:07.098 05:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:07.098 05:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:07.098 05:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:07.098 05:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:07.357 05:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:07.357 05:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:07.357 05:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:07.357 05:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:07.357 05:57:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:07.924 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:07.924 05:57:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:07.924 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:07.924 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:07.924 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:08.183 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:08.183 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:08.183 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:08.183 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:08.183 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:08.442 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:08.442 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:08.443 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:08.443 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:08.443 05:57:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:08.701 05:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:08.701 05:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:08.701 05:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:08.701 05:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:08.701 05:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:08.960 05:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:08.960 05:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:08.960 05:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:08.960 05:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:08.960 05:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:09.526 05:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:09.526 05:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:09.526 05:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:09.526 05:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:09.526 05:57:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:09.783 05:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:09.783 05:57:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:09.783 05:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:09.783 05:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:09.783 05:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:10.041 05:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:10.041 05:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:10.041 05:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:10.041 05:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:10.041 05:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:10.298 05:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:10.298 05:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:10.298 05:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:10.298 05:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:10.298 05:57:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:10.554 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:10.554 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:10.554 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:10.554 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:10.554 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:11.118 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:11.118 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:11.118 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:11.118 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:11.118 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:11.375 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:11.375 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:11.375 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:11.375 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:11.375 05:57:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:11.632 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:11.632 05:57:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:11.632 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:58:11.632 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:11.632 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:11.889 Testing NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:58:11.889 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:11.889 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74487 00:58:11.889 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (74487) - No such process 00:58:11.889 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 74487 00:58:11.889 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:58:11.889 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:58:11.889 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:58:11.889 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:58:11.889 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:58:11.889 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:58:11.889 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:58:11.889 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:58:11.889 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:58:11.889 rmmod nvme_tcp 00:58:12.147 rmmod nvme_fabrics 00:58:12.147 rmmod nvme_keyring 00:58:12.147 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:58:12.147 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:58:12.147 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:58:12.147 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 74443 ']' 00:58:12.147 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 74443 00:58:12.147 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 74443 ']' 00:58:12.147 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 74443 00:58:12.147 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:58:12.147 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:58:12.147 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74443 00:58:12.147 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:58:12.147 05:57:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:58:12.147 killing process with pid 74443 00:58:12.147 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74443' 00:58:12.147 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 74443 00:58:12.147 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 74443 00:58:12.147 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:58:12.147 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:58:12.147 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:58:12.147 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:58:12.147 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:58:12.147 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:58:12.147 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:58:12.147 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:58:12.147 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:58:12.147 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:58:12.147 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:58:12.147 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:58:12.405 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:58:12.405 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:58:12.405 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:58:12.405 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:58:12.405 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:58:12.405 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:58:12.405 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:58:12.405 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:58:12.405 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:58:12.405 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:58:12.405 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:58:12.405 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:58:12.405 
05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:58:12.405 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:58:12.405 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@300 -- # return 0 00:58:12.405 ************************************ 00:58:12.405 END TEST nvmf_connect_stress 00:58:12.405 ************************************ 00:58:12.405 00:58:12.405 real 0m11.967s 00:58:12.405 user 0m38.889s 00:58:12.405 sys 0m3.465s 00:58:12.405 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:58:12.405 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:58:12.405 05:57:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:58:12.405 05:57:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:58:12.405 05:57:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:58:12.405 05:57:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:58:12.405 ************************************ 00:58:12.405 START TEST nvmf_fused_ordering 00:58:12.405 ************************************ 00:58:12.405 05:57:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:58:12.664 * Looking for test storage... 00:58:12.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:58:12.664 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:58:12.664 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:58:12.664 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:58:12.664 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:58:12.664 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:58:12.664 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:58:12.664 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:58:12.664 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:58:12.664 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:58:12.664 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:58:12.664 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:58:12.664 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:58:12.664 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:58:12.664 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:58:12.664 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:58:12.664 05:57:07 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:58:12.664 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:58:12.664 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:58:12.664 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:58:12.664 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:58:12.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:12.665 --rc genhtml_branch_coverage=1 00:58:12.665 --rc genhtml_function_coverage=1 00:58:12.665 --rc genhtml_legend=1 00:58:12.665 --rc geninfo_all_blocks=1 00:58:12.665 --rc geninfo_unexecuted_blocks=1 00:58:12.665 00:58:12.665 ' 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:58:12.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:12.665 --rc genhtml_branch_coverage=1 00:58:12.665 --rc genhtml_function_coverage=1 00:58:12.665 --rc genhtml_legend=1 00:58:12.665 --rc geninfo_all_blocks=1 00:58:12.665 --rc geninfo_unexecuted_blocks=1 00:58:12.665 00:58:12.665 ' 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:58:12.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:12.665 --rc genhtml_branch_coverage=1 00:58:12.665 --rc genhtml_function_coverage=1 00:58:12.665 --rc genhtml_legend=1 00:58:12.665 --rc geninfo_all_blocks=1 00:58:12.665 --rc geninfo_unexecuted_blocks=1 00:58:12.665 00:58:12.665 ' 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:58:12.665 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:58:12.665 --rc genhtml_branch_coverage=1 00:58:12.665 --rc genhtml_function_coverage=1 00:58:12.665 --rc genhtml_legend=1 00:58:12.665 --rc geninfo_all_blocks=1 00:58:12.665 --rc geninfo_unexecuted_blocks=1 00:58:12.665 00:58:12.665 ' 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:58:12.665 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@460 -- # nvmf_veth_init 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:58:12.665 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:58:12.666 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:58:12.666 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:58:12.666 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:58:12.666 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:58:12.666 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:58:12.666 05:57:07 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:58:12.666 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:58:12.666 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:58:12.666 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:58:12.666 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:58:12.666 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:58:12.666 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:58:12.666 Cannot find device "nvmf_init_br" 00:58:12.666 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:58:12.666 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:58:12.666 Cannot find device "nvmf_init_br2" 00:58:12.666 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:58:12.666 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:58:12.666 Cannot find device "nvmf_tgt_br" 00:58:12.666 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # true 00:58:12.666 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:58:12.666 Cannot find device "nvmf_tgt_br2" 00:58:12.666 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # true 00:58:12.666 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:58:12.926 Cannot find device "nvmf_init_br" 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # true 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:58:12.926 Cannot find device "nvmf_init_br2" 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # true 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:58:12.926 Cannot find device "nvmf_tgt_br" 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # true 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:58:12.926 Cannot find device "nvmf_tgt_br2" 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # true 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:58:12.926 Cannot find device "nvmf_br" 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # true 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:58:12.926 Cannot find device "nvmf_init_if" 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # true 00:58:12.926 
05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:58:12.926 Cannot find device "nvmf_init_if2" 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # true 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:58:12.926 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # true 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:58:12.926 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # true 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:58:12.926 05:57:07 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:58:12.926 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:58:12.927 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:58:12.927 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:58:12.927 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:58:13.185 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:58:13.185 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:58:13.185 00:58:13.185 --- 10.0.0.3 ping statistics --- 00:58:13.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:13.185 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:58:13.185 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:58:13.185 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:58:13.185 00:58:13.185 --- 10.0.0.4 ping statistics --- 00:58:13.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:13.185 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:58:13.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:58:13.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:58:13.185 00:58:13.185 --- 10.0.0.1 ping statistics --- 00:58:13.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:13.185 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:58:13.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:58:13.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:58:13.185 00:58:13.185 --- 10.0.0.2 ping statistics --- 00:58:13.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:13.185 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@461 -- # return 0 00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=74862 00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 74862 00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 74862 ']' 00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:58:13.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:58:13.185 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:58:13.185 [2024-12-09 05:57:07.621368] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:58:13.185 [2024-12-09 05:57:07.621428] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:58:13.185 [2024-12-09 05:57:07.757852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:58:13.444 [2024-12-09 05:57:07.787802] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:58:13.445 [2024-12-09 05:57:07.787846] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:58:13.445 [2024-12-09 05:57:07.787872] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:58:13.445 [2024-12-09 05:57:07.787878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:58:13.445 [2024-12-09 05:57:07.787884] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:58:13.445 [2024-12-09 05:57:07.788194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:58:13.445 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:58:13.445 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:58:13.445 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:58:13.445 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:58:13.445 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:58:13.445 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:58:13.445 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:58:13.445 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:13.445 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:58:13.445 [2024-12-09 05:57:07.949715] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:58:13.445 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:13.445 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:58:13.445 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:13.445 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:58:13.445 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:13.445 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:58:13.445 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:13.445 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:58:13.445 [2024-12-09 05:57:07.965864] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:58:13.445 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:13.445 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:58:13.445 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:13.445 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:58:13.445 NULL1 00:58:13.445 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:13.445 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:58:13.445 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:13.445 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:58:13.445 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:13.445 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:58:13.445 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:13.445 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:58:13.445 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:13.445 05:57:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:58:13.445 [2024-12-09 05:57:08.018629] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
00:58:13.445 [2024-12-09 05:57:08.018693] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74904 ] 00:58:14.014 Attached to nqn.2016-06.io.spdk:cnode1 00:58:14.014 Namespace ID: 1 size: 1GB [fused_ordering(0) through fused_ordering(1023): 1024 iterations logged between 00:58:14.014 and 00:58:15.364] 00:58:15.364 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:58:15.364 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:58:15.364 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:58:15.364 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:58:15.364 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:58:15.364 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:58:15.364 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:58:15.364 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:58:15.364 rmmod nvme_tcp 00:58:15.364 rmmod nvme_fabrics 00:58:15.364 rmmod nvme_keyring 00:58:15.364 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:58:15.364 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:58:15.364 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:58:15.364 05:57:09
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 74862 ']' 00:58:15.364 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 74862 00:58:15.364 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 74862 ']' 00:58:15.364 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 74862 00:58:15.364 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:58:15.364 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:58:15.364 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74862 00:58:15.623 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:58:15.623 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:58:15.623 killing process with pid 74862 00:58:15.623 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74862' 00:58:15.623 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 74862 00:58:15.623 05:57:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 74862 00:58:15.623 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:58:15.623 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:58:15.623 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:58:15.623 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:58:15.623 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:58:15.623 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:58:15.623 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:58:15.623 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:58:15.623 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:58:15.623 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:58:15.623 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:58:15.623 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:58:15.623 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:58:15.623 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:58:15.623 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:58:15.623 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:58:15.623 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # 
ip link set nvmf_tgt_br2 down 00:58:15.623 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:58:15.882 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:58:15.882 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:58:15.882 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:58:15.882 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:58:15.882 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@246 -- # remove_spdk_ns 00:58:15.882 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:58:15.882 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:58:15.882 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:58:15.882 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@300 -- # return 0 00:58:15.882 00:58:15.882 real 0m3.357s 00:58:15.882 user 0m3.733s 00:58:15.882 sys 0m1.275s 00:58:15.882 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:58:15.882 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:58:15.882 ************************************ 00:58:15.882 END TEST nvmf_fused_ordering 00:58:15.882 ************************************ 00:58:15.882 05:57:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:58:15.882 05:57:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:58:15.882 05:57:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:58:15.882 05:57:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:58:15.882 ************************************ 00:58:15.882 START TEST nvmf_ns_masking 00:58:15.882 ************************************ 00:58:15.882 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:58:16.142 * Looking for test storage... 
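The nvmftestfini teardown logged above can be reproduced by hand when a run aborts and leaves the topology behind. The sketch below is a simplified manual equivalent, not the harness code itself: TARGET_PID is a placeholder for the real nvmf target PID (74862 in this run), the interface and namespace names are the ones shown in this log, deleting the namespace is used as a shortcut for the per-interface deletions the harness performs, and everything must run as root.

#!/usr/bin/env bash
# Manual equivalent (sketch) of the nvmftestfini cleanup logged above.
set -x

# 1. Stop the SPDK nvmf target process (substitute the real PID for TARGET_PID).
kill "$TARGET_PID" 2>/dev/null || true

# 2. Unload the NVMe-oF initiator kernel modules, as modprobe -v -r did above.
modprobe -r nvme-tcp || true
modprobe -r nvme-fabrics || true

# 3. Drop only the firewall rules the test added; they carry an SPDK_NVMF comment.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# 4. Tear down the veth/bridge topology and the target network namespace.
ip link delete nvmf_br type bridge 2>/dev/null || true
ip link delete nvmf_init_if 2>/dev/null || true
ip link delete nvmf_init_if2 2>/dev/null || true
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true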
00:58:16.142 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:58:16.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:16.142 --rc genhtml_branch_coverage=1 00:58:16.142 --rc genhtml_function_coverage=1 00:58:16.142 --rc genhtml_legend=1 00:58:16.142 --rc geninfo_all_blocks=1 00:58:16.142 --rc geninfo_unexecuted_blocks=1 00:58:16.142 00:58:16.142 ' 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:58:16.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:16.142 --rc genhtml_branch_coverage=1 00:58:16.142 --rc genhtml_function_coverage=1 00:58:16.142 --rc genhtml_legend=1 00:58:16.142 --rc geninfo_all_blocks=1 00:58:16.142 --rc geninfo_unexecuted_blocks=1 00:58:16.142 00:58:16.142 ' 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:58:16.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:16.142 --rc genhtml_branch_coverage=1 00:58:16.142 --rc genhtml_function_coverage=1 00:58:16.142 --rc genhtml_legend=1 00:58:16.142 --rc geninfo_all_blocks=1 00:58:16.142 --rc geninfo_unexecuted_blocks=1 00:58:16.142 00:58:16.142 ' 00:58:16.142 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:58:16.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:16.142 --rc genhtml_branch_coverage=1 00:58:16.142 --rc genhtml_function_coverage=1 00:58:16.142 --rc genhtml_legend=1 00:58:16.142 --rc geninfo_all_blocks=1 00:58:16.142 --rc geninfo_unexecuted_blocks=1 00:58:16.142 00:58:16.142 ' 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- 
# uname -s 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:58:16.143 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # 
hostsock=/var/tmp/host.sock 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=722a9f8f-29eb-4cad-b8ba-fcc823d945ee 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=c4f03435-29cd-4f34-82f0-632e014d8bed 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=b959cee2-6dd5-49a2-b662-5e69fff3d823 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@460 -- # nvmf_veth_init 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:58:16.143 05:57:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:58:16.143 Cannot find device "nvmf_init_br" 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:58:16.143 Cannot find device "nvmf_init_br2" 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:58:16.143 Cannot find device "nvmf_tgt_br" 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # true 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:58:16.143 Cannot find device "nvmf_tgt_br2" 00:58:16.143 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # true 00:58:16.144 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:58:16.144 Cannot find device "nvmf_init_br" 00:58:16.144 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # true 00:58:16.144 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:58:16.144 Cannot find device "nvmf_init_br2" 00:58:16.144 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # true 00:58:16.144 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:58:16.144 Cannot find device "nvmf_tgt_br" 00:58:16.144 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # true 00:58:16.144 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:58:16.144 Cannot find device 
"nvmf_tgt_br2" 00:58:16.144 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # true 00:58:16.144 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:58:16.144 Cannot find device "nvmf_br" 00:58:16.144 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # true 00:58:16.144 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:58:16.403 Cannot find device "nvmf_init_if" 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # true 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:58:16.403 Cannot find device "nvmf_init_if2" 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # true 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:58:16.403 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # true 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:58:16.403 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # true 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:58:16.403 
05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:58:16.403 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:58:16.662 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:58:16.662 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:58:16.662 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:58:16.662 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:58:16.662 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:58:16.662 00:58:16.662 --- 10.0.0.3 ping statistics --- 00:58:16.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:16.662 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:58:16.662 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:58:16.662 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:58:16.662 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:58:16.662 00:58:16.662 --- 10.0.0.4 ping statistics --- 00:58:16.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:16.662 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:58:16.662 05:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:58:16.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:58:16.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:58:16.662 00:58:16.662 --- 10.0.0.1 ping statistics --- 00:58:16.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:16.662 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:58:16.662 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:58:16.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:58:16.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:58:16.663 00:58:16.663 --- 10.0.0.2 ping statistics --- 00:58:16.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:16.663 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:58:16.663 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:58:16.663 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@461 -- # return 0 00:58:16.663 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:58:16.663 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:58:16.663 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:58:16.663 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:58:16.663 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:58:16.663 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:58:16.663 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:58:16.663 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:58:16.663 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:58:16.663 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:58:16.663 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:58:16.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
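The "Cannot find device" / "Cannot open network namespace" messages earlier in this run are expected: nvmf/common.sh first tears down any leftover test network (each failing cleanup command is immediately followed by a no-op true), then rebuilds it from scratch before the target is started (the "Waiting for process..." line above). A condensed, hedged reconstruction of the topology those entries build, using only the interface names and addresses visible in this log, would look roughly like:

    # Target-side namespace plus veth pairs: two for the initiator, two for the target.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk    # target ends live inside the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing seen in this run: initiator 10.0.0.1/.2, target 10.0.0.3/.4.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring everything up and enslave the bridge-side ends to nvmf_br.
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Accept NVMe/TCP (port 4420) on the initiator interfaces and allow forwarding
    # across the bridge; the SPDK_NVMF comment tag (abbreviated here) is what lets
    # the harness strip exactly these rules again at teardown.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4    # sanity pings, matching the output above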
00:58:16.663 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=75140 00:58:16.663 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 75140 00:58:16.663 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 75140 ']' 00:58:16.663 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:58:16.663 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:58:16.663 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:58:16.663 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:58:16.663 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:58:16.663 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:58:16.663 [2024-12-09 05:57:11.105473] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:58:16.663 [2024-12-09 05:57:11.105568] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:58:16.923 [2024-12-09 05:57:11.257708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:58:16.923 [2024-12-09 05:57:11.294801] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:58:16.923 [2024-12-09 05:57:11.295024] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:58:16.923 [2024-12-09 05:57:11.295049] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:58:16.923 [2024-12-09 05:57:11.295060] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:58:16.923 [2024-12-09 05:57:11.295069] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
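nvmfappstart then launches the target inside that namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF), records its PID (75140 in this run) and waits for the JSON-RPC socket at /var/tmp/spdk.sock before continuing. waitforlisten in autotest_common.sh handles the retries and time-outs; a simplified stand-in, assuming the repository paths used in this run and using spdk_get_version only as a cheap readiness probe, might be:

    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
    "${NVMF_TARGET_NS_CMD[@]}" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!

    # Poll the RPC socket until the app answers; any lightweight RPC would do here.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done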
00:58:16.923 [2024-12-09 05:57:11.295440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:58:16.923 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:58:16.923 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:58:16.923 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:58:16.923 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:58:16.923 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:58:16.923 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:58:16.923 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:58:17.182 [2024-12-09 05:57:11.722292] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:58:17.182 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:58:17.182 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:58:17.182 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:58:17.442 Malloc1 00:58:17.442 05:57:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:58:17.701 Malloc2 00:58:17.701 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:58:17.960 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:58:18.219 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:58:18.477 [2024-12-09 05:57:12.920836] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:58:18.477 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:58:18.477 05:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b959cee2-6dd5-49a2-b662-5e69fff3d823 -a 10.0.0.3 -s 4420 -i 4 00:58:18.477 05:57:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:58:18.477 05:57:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:58:18.477 05:57:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:58:18.477 05:57:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:58:18.477 05:57:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:58:21.010 05:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:58:21.010 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:58:21.010 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:58:21.010 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:58:21.010 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:58:21.010 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:58:21.010 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:58:21.010 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:58:21.010 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:58:21.010 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:58:21.010 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:58:21.010 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:58:21.010 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:58:21.010 [ 0]:0x1 00:58:21.010 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:58:21.010 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:58:21.010 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c67904ecc1cb4f1c810f7a6ad4c79e05 00:58:21.010 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c67904ecc1cb4f1c810f7a6ad4c79e05 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:58:21.010 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:58:21.010 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:58:21.010 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:58:21.010 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:58:21.010 [ 0]:0x1 00:58:21.010 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:58:21.010 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:58:21.010 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c67904ecc1cb4f1c810f7a6ad4c79e05 00:58:21.010 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c67904ecc1cb4f1c810f7a6ad4c79e05 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:58:21.010 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:58:21.010 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # grep 0x2 00:58:21.010 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:58:21.010 [ 1]:0x2 00:58:21.010 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:58:21.011 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:58:21.268 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=37850f47f4374536b06e10e67ba5a08e 00:58:21.268 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 37850f47f4374536b06e10e67ba5a08e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:58:21.268 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:58:21.268 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:58:21.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:58:21.268 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:58:21.525 05:57:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:58:21.783 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:58:21.783 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b959cee2-6dd5-49a2-b662-5e69fff3d823 -a 10.0.0.3 -s 4420 -i 4 00:58:21.783 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:58:21.783 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:58:21.783 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:58:21.783 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:58:21.783 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:58:21.783 05:57:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:58:23.732 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:58:23.990 [ 0]:0x2 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=37850f47f4374536b06e10e67ba5a08e 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 37850f47f4374536b06e10e67ba5a08e != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:58:23.990 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:58:24.249 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:58:24.249 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:58:24.249 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:58:24.249 [ 0]:0x1 00:58:24.249 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:58:24.249 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:58:24.507 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c67904ecc1cb4f1c810f7a6ad4c79e05 00:58:24.507 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c67904ecc1cb4f1c810f7a6ad4c79e05 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:58:24.507 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:58:24.507 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:58:24.507 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:58:24.507 [ 1]:0x2 00:58:24.507 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:58:24.507 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:58:24.507 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=37850f47f4374536b06e10e67ba5a08e 00:58:24.507 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 37850f47f4374536b06e10e67ba5a08e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:58:24.507 05:57:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:58:24.765 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:58:24.765 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:58:24.765 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:58:24.765 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:58:24.765 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:58:24.765 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:58:24.765 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:58:24.765 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:58:24.765 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:58:24.765 05:57:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:58:24.765 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:58:24.765 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:58:24.765 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:58:24.765 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:58:24.765 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:58:24.765 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:58:24.765 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:58:24.765 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:58:24.765 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:58:24.765 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:58:24.765 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:58:24.765 [ 0]:0x2 00:58:24.766 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:58:24.766 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:58:24.766 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=37850f47f4374536b06e10e67ba5a08e 00:58:24.766 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 37850f47f4374536b06e10e67ba5a08e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:58:24.766 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:58:24.766 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:58:24.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:58:24.766 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:58:25.330 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:58:25.330 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b959cee2-6dd5-49a2-b662-5e69fff3d823 -a 10.0.0.3 -s 4420 -i 4 00:58:25.330 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:58:25.330 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:58:25.330 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:58:25.330 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:58:25.330 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:58:25.330 05:57:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:58:27.235 05:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:58:27.235 05:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:58:27.235 05:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:58:27.235 05:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:58:27.235 05:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:58:27.235 05:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:58:27.235 05:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:58:27.235 05:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:58:27.495 05:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:58:27.495 05:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:58:27.495 05:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:58:27.495 05:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:58:27.495 05:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:58:27.495 [ 0]:0x1 00:58:27.495 05:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:58:27.495 05:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:58:27.495 05:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c67904ecc1cb4f1c810f7a6ad4c79e05 00:58:27.495 05:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c67904ecc1cb4f1c810f7a6ad4c79e05 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:58:27.495 05:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:58:27.495 05:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:58:27.495 05:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:58:27.495 [ 1]:0x2 00:58:27.495 05:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:58:27.495 05:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:58:27.495 05:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=37850f47f4374536b06e10e67ba5a08e 00:58:27.495 05:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 37850f47f4374536b06e10e67ba5a08e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:58:27.495 05:57:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:58:27.754 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:58:27.754 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:58:27.754 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:58:27.754 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:58:27.754 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:58:27.754 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:58:27.754 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:58:27.754 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:58:27.754 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:58:27.754 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:58:27.754 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:58:27.754 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:58:27.754 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:58:27.754 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:58:27.754 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:58:27.754 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:58:27.754 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:58:27.754 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:58:27.754 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:58:27.754 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:58:27.754 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:58:27.754 [ 0]:0x2 00:58:27.754 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:58:27.754 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:58:28.013 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=37850f47f4374536b06e10e67ba5a08e 00:58:28.013 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 37850f47f4374536b06e10e67ba5a08e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:58:28.013 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:58:28.013 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@652 -- # local es=0 00:58:28.013 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:58:28.013 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:58:28.013 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:58:28.013 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:58:28.013 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:58:28.013 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:58:28.013 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:58:28.013 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:58:28.013 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:58:28.013 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:58:28.013 [2024-12-09 05:57:22.585845] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:58:28.013 2024/12/09 05:57:22 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:58:28.013 request: 00:58:28.013 { 00:58:28.013 "method": "nvmf_ns_remove_host", 00:58:28.013 "params": { 00:58:28.013 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:58:28.013 "nsid": 2, 00:58:28.013 "host": "nqn.2016-06.io.spdk:host1" 00:58:28.013 } 00:58:28.013 } 00:58:28.013 Got JSON-RPC error response 00:58:28.014 GoRPCClient: error on JSON-RPC call 00:58:28.272 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:58:28.272 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:58:28.272 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:58:28.272 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:58:28.272 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:58:28.272 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:58:28.273 05:57:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:58:28.273 [ 0]:0x2 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=37850f47f4374536b06e10e67ba5a08e 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 37850f47f4374536b06e10e67ba5a08e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:58:28.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=75502 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 75502 /var/tmp/host.sock 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@835 -- # '[' -z 75502 ']' 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:58:28.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:58:28.273 05:57:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:58:28.273 [2024-12-09 05:57:22.839135] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 00:58:28.273 [2024-12-09 05:57:22.839214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75502 ] 00:58:28.533 [2024-12-09 05:57:22.986432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:58:28.533 [2024-12-09 05:57:23.024784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:58:28.792 05:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:58:28.792 05:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:58:28.792 05:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:58:29.051 05:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:58:29.309 05:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 722a9f8f-29eb-4cad-b8ba-fcc823d945ee 00:58:29.309 05:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:58:29.309 05:57:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 722A9F8F29EB4CADB8BAFCC823D945EE -i 00:58:29.568 05:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid c4f03435-29cd-4f34-82f0-632e014d8bed 00:58:29.568 05:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:58:29.568 05:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g C4F0343529CD4F3482F0632E014D8BED -i 00:58:29.827 05:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:58:30.087 05:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:58:30.346 05:57:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:58:30.346 05:57:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:58:30.617 nvme0n1 00:58:30.617 05:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:58:30.617 05:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:58:31.184 nvme1n2 00:58:31.184 05:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:58:31.184 05:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:58:31.184 05:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:58:31.184 05:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:58:31.184 05:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:58:31.441 05:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:58:31.441 05:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:58:31.441 05:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:58:31.441 05:57:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:58:31.699 05:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 722a9f8f-29eb-4cad-b8ba-fcc823d945ee == \7\2\2\a\9\f\8\f\-\2\9\e\b\-\4\c\a\d\-\b\8\b\a\-\f\c\c\8\2\3\d\9\4\5\e\e ]] 00:58:31.699 05:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:58:31.699 05:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:58:31.699 05:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:58:31.958 05:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ c4f03435-29cd-4f34-82f0-632e014d8bed == \c\4\f\0\3\4\3\5\-\2\9\c\d\-\4\f\3\4\-\8\2\f\0\-\6\3\2\e\0\1\4\d\8\b\e\d ]] 00:58:31.958 05:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:58:32.216 05:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 
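Boiled down, the masking flow the test has just exercised is: re-create the two namespaces hidden from all hosts and with fixed NGUIDs, grant each host NQN exactly one of them with nvmf_ns_add_host, then verify from a second SPDK app (the host-side spdk_tgt on /var/tmp/host.sock) that each attached controller surfaces only its own namespace. A sketch with the NQNs, GUIDs and addresses taken from this log (the non-auto-visible flag is an assumption carried over from the long form used earlier in this test):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Target side (default socket /var/tmp/spdk.sock): hidden namespaces, one host each.
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 \
         -g 722A9F8F29EB4CADB8BAFCC823D945EE --no-auto-visible
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 \
         -g C4F0343529CD4F3482F0632E014D8BED --no-auto-visible
    $RPC nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    $RPC nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2

    # Host side: attach once per host NQN; each controller should expose only the
    # namespace its NQN was granted, which shows up as exactly one bdev.
    HOST_RPC="$RPC -s /var/tmp/host.sock"
    $HOST_RPC bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 \
         -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0   # -> nvme0n1
    $HOST_RPC bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 \
         -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1   # -> nvme1n2
    $HOST_RPC bdev_get_bdevs | jq -r '.[].name'             # expect: nvme0n1 nvme1n2
    $HOST_RPC bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'   # expect 722a9f8f-29eb-4cad-b8ba-fcc823d945ee
    $HOST_RPC bdev_get_bdevs -b nvme1n2 | jq -r '.[].uuid'   # expect c4f03435-29cd-4f34-82f0-632e014d8bed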
00:58:32.475 05:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 722a9f8f-29eb-4cad-b8ba-fcc823d945ee 00:58:32.475 05:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:58:32.475 05:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 722A9F8F29EB4CADB8BAFCC823D945EE 00:58:32.475 05:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:58:32.475 05:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 722A9F8F29EB4CADB8BAFCC823D945EE 00:58:32.475 05:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:58:32.475 05:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:58:32.475 05:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:58:32.475 05:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:58:32.475 05:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:58:32.475 05:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:58:32.475 05:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:58:32.475 05:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:58:32.475 05:57:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 722A9F8F29EB4CADB8BAFCC823D945EE 00:58:32.734 [2024-12-09 05:57:27.136936] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:58:32.734 [2024-12-09 05:57:27.136997] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:58:32.734 [2024-12-09 05:57:27.137009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:58:32.734 2024/12/09 05:57:27 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:invalid hide_metadata:%!s(bool=false) nguid:722A9F8F29EB4CADB8BAFCC823D945EE no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:58:32.734 request: 00:58:32.734 { 00:58:32.734 "method": "nvmf_subsystem_add_ns", 00:58:32.734 "params": { 00:58:32.734 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:58:32.734 "namespace": { 00:58:32.734 "bdev_name": "invalid", 00:58:32.734 "nsid": 1, 00:58:32.734 "nguid": "722A9F8F29EB4CADB8BAFCC823D945EE", 00:58:32.734 "no_auto_visible": false, 00:58:32.734 "hide_metadata": false 00:58:32.734 } 00:58:32.734 } 00:58:32.734 } 00:58:32.734 Got JSON-RPC error response 00:58:32.734 
GoRPCClient: error on JSON-RPC call 00:58:32.734 05:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:58:32.734 05:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:58:32.734 05:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:58:32.734 05:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:58:32.734 05:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 722a9f8f-29eb-4cad-b8ba-fcc823d945ee 00:58:32.734 05:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:58:32.734 05:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 722A9F8F29EB4CADB8BAFCC823D945EE -i 00:58:32.993 05:57:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:58:34.899 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:58:34.899 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:58:34.899 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:58:35.475 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:58:35.475 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 75502 00:58:35.475 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 75502 ']' 00:58:35.475 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 75502 00:58:35.475 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:58:35.475 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:58:35.475 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75502 00:58:35.475 killing process with pid 75502 00:58:35.475 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:58:35.475 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:58:35.475 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75502' 00:58:35.475 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 75502 00:58:35.475 05:57:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 75502 00:58:35.475 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:58:35.734 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:58:35.734 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:58:35.735 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:58:35.735 05:57:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:58:35.735 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:58:35.735 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:58:35.735 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:58:35.735 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:58:35.735 rmmod nvme_tcp 00:58:35.735 rmmod nvme_fabrics 00:58:35.993 rmmod nvme_keyring 00:58:35.993 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:58:35.993 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:58:35.993 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:58:35.993 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 75140 ']' 00:58:35.993 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 75140 00:58:35.993 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 75140 ']' 00:58:35.993 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 75140 00:58:35.993 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:58:35.993 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:58:35.993 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75140 00:58:35.993 killing process with pid 75140 00:58:35.994 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:58:35.994 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:58:35.994 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75140' 00:58:35.994 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 75140 00:58:35.994 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 75140 00:58:35.994 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:58:35.994 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:58:35.994 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:58:35.994 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:58:35.994 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:58:35.994 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:58:35.994 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:58:35.994 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:58:35.994 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:58:35.994 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 
00:58:35.994 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:58:36.253 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:58:36.253 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:58:36.253 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:58:36.253 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:58:36.253 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:58:36.253 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:58:36.253 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:58:36.253 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:58:36.253 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:58:36.253 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:58:36.253 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:58:36.253 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@246 -- # remove_spdk_ns 00:58:36.253 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:58:36.253 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:58:36.253 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:58:36.253 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@300 -- # return 0 00:58:36.253 00:58:36.253 real 0m20.410s 00:58:36.253 user 0m34.527s 00:58:36.253 sys 0m2.957s 00:58:36.253 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:58:36.253 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:58:36.253 ************************************ 00:58:36.253 END TEST nvmf_ns_masking 00:58:36.253 ************************************ 00:58:36.514 05:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 0 -eq 1 ]] 00:58:36.514 05:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:58:36.514 05:57:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:58:36.514 05:57:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:58:36.514 05:57:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:58:36.514 05:57:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:58:36.514 ************************************ 00:58:36.514 START TEST nvmf_auth_target 00:58:36.514 ************************************ 00:58:36.514 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:58:36.514 * Looking for test storage... 00:58:36.514 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:58:36.514 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:58:36.514 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:58:36.514 05:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:58:36.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:36.514 --rc genhtml_branch_coverage=1 00:58:36.514 --rc genhtml_function_coverage=1 00:58:36.514 --rc genhtml_legend=1 00:58:36.514 --rc geninfo_all_blocks=1 00:58:36.514 --rc geninfo_unexecuted_blocks=1 00:58:36.514 00:58:36.514 ' 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:58:36.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:36.514 --rc genhtml_branch_coverage=1 00:58:36.514 --rc genhtml_function_coverage=1 00:58:36.514 --rc genhtml_legend=1 00:58:36.514 --rc geninfo_all_blocks=1 00:58:36.514 --rc geninfo_unexecuted_blocks=1 00:58:36.514 00:58:36.514 ' 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:58:36.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:36.514 --rc genhtml_branch_coverage=1 00:58:36.514 --rc genhtml_function_coverage=1 00:58:36.514 --rc genhtml_legend=1 00:58:36.514 --rc geninfo_all_blocks=1 00:58:36.514 --rc geninfo_unexecuted_blocks=1 00:58:36.514 00:58:36.514 ' 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:58:36.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:58:36.514 --rc genhtml_branch_coverage=1 00:58:36.514 --rc genhtml_function_coverage=1 00:58:36.514 --rc genhtml_legend=1 00:58:36.514 --rc geninfo_all_blocks=1 00:58:36.514 --rc geninfo_unexecuted_blocks=1 00:58:36.514 00:58:36.514 ' 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
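The lcov probe above walks scripts/common.sh's version comparison: each version string is split on '.', '-' and ':' and compared field by field. A condensed sketch of that check, assuming a missing field compares as zero (lt_version and the lcov_is_old flag are illustrative names, not the script's own):

    lt_version() {   # returns 0 when $1 < $2, as the "lt 1.15 2" call above does
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1
    }
    lt_version "$(lcov --version | awk '{print $NF}')" 2 && lcov_is_old=1   # hypothetical flag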
nvmf/common.sh@7 -- # uname -s 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:58:36.514 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:58:36.515 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:58:36.515 
05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:58:36.515 Cannot find device "nvmf_init_br" 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:58:36.515 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:58:36.773 Cannot find device "nvmf_init_br2" 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:58:36.773 Cannot find device "nvmf_tgt_br" 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:58:36.773 Cannot find device "nvmf_tgt_br2" 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:58:36.773 Cannot find device "nvmf_init_br" 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:58:36.773 Cannot find device "nvmf_init_br2" 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:58:36.773 Cannot find device "nvmf_tgt_br" 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:58:36.773 Cannot find device "nvmf_tgt_br2" 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:58:36.773 Cannot find device "nvmf_br" 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:58:36.773 Cannot find device "nvmf_init_if" 00:58:36.773 05:57:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:58:36.773 Cannot find device "nvmf_init_if2" 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:58:36.773 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:58:36.773 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:58:36.773 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:58:37.031 05:57:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:58:37.031 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:58:37.031 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:58:37.031 00:58:37.031 --- 10.0.0.3 ping statistics --- 00:58:37.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:37.031 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:58:37.031 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:58:37.031 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:58:37.031 00:58:37.031 --- 10.0.0.4 ping statistics --- 00:58:37.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:37.031 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:58:37.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:58:37.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:58:37.031 00:58:37.031 --- 10.0.0.1 ping statistics --- 00:58:37.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:37.031 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:58:37.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:58:37.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:58:37.031 00:58:37.031 --- 10.0.0.2 ping statistics --- 00:58:37.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:37.031 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=75982 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 75982 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 75982 ']' 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
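At this point nvmf_veth_init has built and verified the test topology: nvmf_init_if/nvmf_init_if2 (10.0.0.1, 10.0.0.2) stay in the root namespace, nvmf_tgt_if/nvmf_tgt_if2 (10.0.0.3, 10.0.0.4) are moved into nvmf_tgt_ns_spdk, the peer ends are enslaved to the nvmf_br bridge, and TCP port 4420 is opened with iptables. A condensed sketch of one initiator/target pair, following the commands in the trace (the intermediate "ip link set ... up" steps are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the root ns
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target end is moved into the ns
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # reverse-path check, as in the pings above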
00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:58:37.031 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:37.290 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:58:37.290 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:58:37.290 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:58:37.290 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:58:37.290 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:37.290 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:58:37.549 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=76007 00:58:37.549 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:58:37.549 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:58:37.549 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:58:37.549 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:58:37.549 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:58:37.549 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:58:37.549 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:58:37.549 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:58:37.549 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:58:37.549 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=822757afc3a75ca2ebfc0459d9be09631910ed6c81e915d6 00:58:37.549 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:58:37.549 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.aHB 00:58:37.549 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 822757afc3a75ca2ebfc0459d9be09631910ed6c81e915d6 0 00:58:37.549 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 822757afc3a75ca2ebfc0459d9be09631910ed6c81e915d6 0 00:58:37.549 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:58:37.549 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:58:37.549 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=822757afc3a75ca2ebfc0459d9be09631910ed6c81e915d6 00:58:37.549 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:58:37.549 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:58:37.549 05:57:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.aHB 00:58:37.549 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.aHB 00:58:37.549 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.aHB 00:58:37.549 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:58:37.549 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:58:37.549 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:58:37.549 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:58:37.549 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:58:37.549 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:58:37.549 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:58:37.549 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=28765b2e63adc0f4df0a4f433212d17b52c1dc34dc064c190d53d6e86c212c2d 00:58:37.549 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:58:37.550 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.7fi 00:58:37.550 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 28765b2e63adc0f4df0a4f433212d17b52c1dc34dc064c190d53d6e86c212c2d 3 00:58:37.550 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 28765b2e63adc0f4df0a4f433212d17b52c1dc34dc064c190d53d6e86c212c2d 3 00:58:37.550 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:58:37.550 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:58:37.550 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=28765b2e63adc0f4df0a4f433212d17b52c1dc34dc064c190d53d6e86c212c2d 00:58:37.550 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:58:37.550 05:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.7fi 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.7fi 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.7fi 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:58:37.550 05:57:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a9e5abc587e5510e072cd304bbd819d1 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.nUT 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a9e5abc587e5510e072cd304bbd819d1 1 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a9e5abc587e5510e072cd304bbd819d1 1 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a9e5abc587e5510e072cd304bbd819d1 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.nUT 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.nUT 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.nUT 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5269e4658c2d7cefeb26017775f3bee2ee53c56d285a3192 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.se5 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5269e4658c2d7cefeb26017775f3bee2ee53c56d285a3192 2 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5269e4658c2d7cefeb26017775f3bee2ee53c56d285a3192 2 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5269e4658c2d7cefeb26017775f3bee2ee53c56d285a3192 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:58:37.550 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.se5 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.se5 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.se5 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=304afaa6eb6be3ee6519d2990fb73f87bec30cbf9f208e73 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ib1 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 304afaa6eb6be3ee6519d2990fb73f87bec30cbf9f208e73 2 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 304afaa6eb6be3ee6519d2990fb73f87bec30cbf9f208e73 2 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=304afaa6eb6be3ee6519d2990fb73f87bec30cbf9f208e73 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ib1 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ib1 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.ib1 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:58:37.809 05:57:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=092f892423e939d70060bb8971e883f0 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.i3i 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 092f892423e939d70060bb8971e883f0 1 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 092f892423e939d70060bb8971e883f0 1 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=092f892423e939d70060bb8971e883f0 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.i3i 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.i3i 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.i3i 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4aa41613a25a6e138fe3b0a05c807c201ed5b1a8081aa716bc31a50084634d74 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.8gG 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
4aa41613a25a6e138fe3b0a05c807c201ed5b1a8081aa716bc31a50084634d74 3 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4aa41613a25a6e138fe3b0a05c807c201ed5b1a8081aa716bc31a50084634d74 3 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4aa41613a25a6e138fe3b0a05c807c201ed5b1a8081aa716bc31a50084634d74 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:58:37.809 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:58:37.810 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.8gG 00:58:37.810 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.8gG 00:58:37.810 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.8gG 00:58:37.810 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:58:37.810 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 75982 00:58:37.810 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 75982 ']' 00:58:37.810 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:58:37.810 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:58:37.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:58:37.810 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:58:37.810 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:58:37.810 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:38.390 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:58:38.390 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:58:38.390 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 76007 /var/tmp/host.sock 00:58:38.390 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 76007 ']' 00:58:38.390 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:58:38.390 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:58:38.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:58:38.390 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
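Each gen_dhchap_key call above reads the requested number of random bytes with xxd, formats them as a DHHC-1 secret via the inline "python -" helper, and stores the result in a mode-0600 temp file for later registration. A minimal sketch of one null-digest, 48-character key, assuming the standard DHHC-1 encoding (base64 of the key bytes followed by a little-endian CRC-32 of the key); the actual formatting is done by format_dhchap_key in nvmf/common.sh:

    key_hex=$(xxd -p -c0 -l 24 /dev/urandom)        # 24 random bytes -> 48 hex characters
    keyfile=$(mktemp -t spdk.key-null.XXX)
    # digest id: 0 null, 1 sha256, 2 sha384, 3 sha512; CRC-32 trailer is an assumption
    python3 -c 'import base64,binascii,sys,zlib; key=binascii.unhexlify(sys.argv[1]); crc=zlib.crc32(key).to_bytes(4,"little"); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key+crc).decode()))' "$key_hex" 0 > "$keyfile"
    chmod 0600 "$keyfile"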
00:58:38.390 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:58:38.390 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:38.390 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:58:38.390 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:58:38.390 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:58:38.390 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:38.390 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:38.390 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:38.390 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:58:38.390 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.aHB 00:58:38.390 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:38.390 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:38.390 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:38.390 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.aHB 00:58:38.390 05:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.aHB 00:58:38.957 05:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.7fi ]] 00:58:38.957 05:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7fi 00:58:38.957 05:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:38.957 05:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:38.957 05:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:38.957 05:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7fi 00:58:38.957 05:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7fi 00:58:38.958 05:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:58:38.958 05:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.nUT 00:58:38.958 05:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:38.958 05:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:38.958 05:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:38.958 05:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.nUT 00:58:38.958 05:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.nUT 00:58:39.216 05:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.se5 ]] 00:58:39.216 05:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.se5 00:58:39.216 05:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:39.216 05:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:39.216 05:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:39.216 05:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.se5 00:58:39.216 05:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.se5 00:58:39.474 05:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:58:39.474 05:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ib1 00:58:39.474 05:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:39.474 05:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:39.474 05:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:39.474 05:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ib1 00:58:39.474 05:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ib1 00:58:39.733 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.i3i ]] 00:58:39.733 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.i3i 00:58:39.733 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:39.733 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:39.733 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:39.733 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.i3i 00:58:39.733 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.i3i 00:58:39.992 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:58:39.992 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.8gG 00:58:39.992 05:57:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:39.992 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:39.992 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:39.992 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.8gG 00:58:39.992 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.8gG 00:58:40.251 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:58:40.251 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:58:40.251 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:58:40.251 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:58:40.251 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:58:40.251 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:58:40.510 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:58:40.510 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:58:40.510 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:58:40.510 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:58:40.510 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:58:40.510 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:40.510 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:40.510 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:40.510 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:40.510 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:40.510 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:40.510 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:40.510 05:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:40.773 00:58:40.773 05:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:58:40.773 05:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:58:40.773 05:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:41.033 05:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:41.033 05:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:41.033 05:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:41.033 05:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:41.033 05:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:41.033 05:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:58:41.033 { 00:58:41.033 "auth": { 00:58:41.033 "dhgroup": "null", 00:58:41.033 "digest": "sha256", 00:58:41.033 "state": "completed" 00:58:41.033 }, 00:58:41.033 "cntlid": 1, 00:58:41.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:58:41.033 "listen_address": { 00:58:41.033 "adrfam": "IPv4", 00:58:41.033 "traddr": "10.0.0.3", 00:58:41.033 "trsvcid": "4420", 00:58:41.033 "trtype": "TCP" 00:58:41.033 }, 00:58:41.033 "peer_address": { 00:58:41.033 "adrfam": "IPv4", 00:58:41.033 "traddr": "10.0.0.1", 00:58:41.033 "trsvcid": "58394", 00:58:41.033 "trtype": "TCP" 00:58:41.033 }, 00:58:41.033 "qid": 0, 00:58:41.033 "state": "enabled", 00:58:41.033 "thread": "nvmf_tgt_poll_group_000" 00:58:41.033 } 00:58:41.033 ]' 00:58:41.033 05:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:58:41.033 05:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:58:41.033 05:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:58:41.033 05:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:58:41.033 05:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:58:41.291 05:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:41.291 05:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:41.291 05:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:41.549 05:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 00:58:41.549 05:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 00:58:45.738 05:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:45.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:45.738 05:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:58:45.738 05:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:45.738 05:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:45.738 05:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:45.738 05:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:58:45.738 05:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:58:45.738 05:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:58:45.738 05:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:58:45.738 05:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:58:45.738 05:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:58:45.738 05:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:58:45.738 05:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:58:45.738 05:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:45.738 05:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:45.738 05:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:45.738 05:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:45.738 05:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:45.738 05:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:45.738 05:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:45.738 05:57:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:45.997 00:58:45.998 05:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:58:45.998 05:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:58:45.998 05:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:46.257 05:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:46.257 05:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:46.257 05:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:46.257 05:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:46.257 05:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:46.257 05:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:58:46.257 { 00:58:46.257 "auth": { 00:58:46.257 "dhgroup": "null", 00:58:46.257 "digest": "sha256", 00:58:46.257 "state": "completed" 00:58:46.257 }, 00:58:46.257 "cntlid": 3, 00:58:46.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:58:46.257 "listen_address": { 00:58:46.257 "adrfam": "IPv4", 00:58:46.257 "traddr": "10.0.0.3", 00:58:46.257 "trsvcid": "4420", 00:58:46.257 "trtype": "TCP" 00:58:46.257 }, 00:58:46.257 "peer_address": { 00:58:46.257 "adrfam": "IPv4", 00:58:46.257 "traddr": "10.0.0.1", 00:58:46.257 "trsvcid": "58420", 00:58:46.257 "trtype": "TCP" 00:58:46.257 }, 00:58:46.258 "qid": 0, 00:58:46.258 "state": "enabled", 00:58:46.258 "thread": "nvmf_tgt_poll_group_000" 00:58:46.258 } 00:58:46.258 ]' 00:58:46.258 05:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:58:46.517 05:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:58:46.517 05:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:58:46.517 05:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:58:46.517 05:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:58:46.517 05:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:46.517 05:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:46.517 05:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:46.776 05:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret 
DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 00:58:46.776 05:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 00:58:47.442 05:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:47.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:47.443 05:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:58:47.443 05:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:47.443 05:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:47.443 05:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:47.443 05:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:58:47.443 05:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:58:47.443 05:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:58:47.714 05:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:58:47.714 05:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:58:47.714 05:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:58:47.714 05:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:58:47.714 05:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:58:47.714 05:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:47.714 05:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:47.714 05:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:47.714 05:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:47.714 05:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:47.714 05:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:47.714 05:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:47.714 05:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:47.993 00:58:47.993 05:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:58:47.993 05:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:47.993 05:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:58:48.252 05:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:48.252 05:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:48.252 05:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:48.252 05:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:48.252 05:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:48.252 05:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:58:48.252 { 00:58:48.252 "auth": { 00:58:48.252 "dhgroup": "null", 00:58:48.252 "digest": "sha256", 00:58:48.252 "state": "completed" 00:58:48.252 }, 00:58:48.252 "cntlid": 5, 00:58:48.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:58:48.252 "listen_address": { 00:58:48.252 "adrfam": "IPv4", 00:58:48.252 "traddr": "10.0.0.3", 00:58:48.252 "trsvcid": "4420", 00:58:48.252 "trtype": "TCP" 00:58:48.252 }, 00:58:48.252 "peer_address": { 00:58:48.252 "adrfam": "IPv4", 00:58:48.252 "traddr": "10.0.0.1", 00:58:48.252 "trsvcid": "58456", 00:58:48.252 "trtype": "TCP" 00:58:48.252 }, 00:58:48.252 "qid": 0, 00:58:48.252 "state": "enabled", 00:58:48.252 "thread": "nvmf_tgt_poll_group_000" 00:58:48.252 } 00:58:48.252 ]' 00:58:48.252 05:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:58:48.252 05:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:58:48.252 05:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:58:48.510 05:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:58:48.510 05:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:58:48.510 05:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:48.510 05:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:48.510 05:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:48.768 05:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 00:58:48.768 05:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 00:58:49.335 05:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:49.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:49.335 05:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:58:49.335 05:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:49.335 05:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:49.335 05:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:49.335 05:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:58:49.335 05:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:58:49.335 05:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:58:49.594 05:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:58:49.594 05:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:58:49.594 05:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:58:49.594 05:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:58:49.594 05:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:58:49.594 05:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:49.594 05:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key3 00:58:49.594 05:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:49.594 05:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:49.594 05:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:49.594 05:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:58:49.594 05:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:58:49.594 05:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:58:49.906 00:58:49.906 05:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:58:49.906 05:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:49.906 05:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:58:50.164 05:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:50.164 05:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:50.164 05:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:50.164 05:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:50.164 05:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:50.164 05:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:58:50.164 { 00:58:50.164 "auth": { 00:58:50.164 "dhgroup": "null", 00:58:50.164 "digest": "sha256", 00:58:50.164 "state": "completed" 00:58:50.164 }, 00:58:50.164 "cntlid": 7, 00:58:50.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:58:50.164 "listen_address": { 00:58:50.164 "adrfam": "IPv4", 00:58:50.164 "traddr": "10.0.0.3", 00:58:50.164 "trsvcid": "4420", 00:58:50.164 "trtype": "TCP" 00:58:50.164 }, 00:58:50.164 "peer_address": { 00:58:50.164 "adrfam": "IPv4", 00:58:50.164 "traddr": "10.0.0.1", 00:58:50.164 "trsvcid": "58488", 00:58:50.164 "trtype": "TCP" 00:58:50.164 }, 00:58:50.165 "qid": 0, 00:58:50.165 "state": "enabled", 00:58:50.165 "thread": "nvmf_tgt_poll_group_000" 00:58:50.165 } 00:58:50.165 ]' 00:58:50.165 05:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:58:50.165 05:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:58:50.165 05:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:58:50.165 05:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:58:50.165 05:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:58:50.423 05:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:50.423 05:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:50.423 05:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:50.691 05:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 00:58:50.691 05:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 00:58:51.259 05:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:51.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:51.259 05:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:58:51.259 05:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:51.259 05:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:51.259 05:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:51.259 05:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:58:51.259 05:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:58:51.259 05:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:58:51.259 05:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:58:51.516 05:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:58:51.516 05:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:58:51.516 05:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:58:51.516 05:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:58:51.516 05:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:58:51.516 05:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:51.516 05:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:51.516 05:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:51.516 05:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:51.516 05:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:51.516 05:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:51.516 05:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:51.516 05:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:52.083 00:58:52.083 05:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:58:52.083 05:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:58:52.083 05:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:52.083 05:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:52.083 05:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:52.083 05:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:52.083 05:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:52.342 05:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:52.342 05:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:58:52.342 { 00:58:52.342 "auth": { 00:58:52.342 "dhgroup": "ffdhe2048", 00:58:52.342 "digest": "sha256", 00:58:52.342 "state": "completed" 00:58:52.342 }, 00:58:52.342 "cntlid": 9, 00:58:52.342 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:58:52.342 "listen_address": { 00:58:52.342 "adrfam": "IPv4", 00:58:52.342 "traddr": "10.0.0.3", 00:58:52.342 "trsvcid": "4420", 00:58:52.342 "trtype": "TCP" 00:58:52.342 }, 00:58:52.342 "peer_address": { 00:58:52.342 "adrfam": "IPv4", 00:58:52.342 "traddr": "10.0.0.1", 00:58:52.342 "trsvcid": "34288", 00:58:52.342 "trtype": "TCP" 00:58:52.342 }, 00:58:52.342 "qid": 0, 00:58:52.342 "state": "enabled", 00:58:52.342 "thread": "nvmf_tgt_poll_group_000" 00:58:52.342 } 00:58:52.342 ]' 00:58:52.342 05:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:58:52.342 05:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:58:52.342 05:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:58:52.342 05:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:58:52.342 05:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:58:52.342 05:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:52.342 05:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:52.342 05:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:52.600 
05:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 00:58:52.600 05:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 00:58:53.167 05:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:53.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:53.167 05:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:58:53.167 05:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:53.167 05:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:53.167 05:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:53.167 05:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:58:53.167 05:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:58:53.167 05:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:58:53.426 05:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:58:53.426 05:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:58:53.426 05:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:58:53.426 05:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:58:53.426 05:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:58:53.426 05:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:53.426 05:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:53.426 05:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:53.426 05:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:53.426 05:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:53.426 05:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:53.426 05:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:53.426 05:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:58:53.993 00:58:53.993 05:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:58:53.993 05:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:58:53.993 05:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:54.252 05:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:54.252 05:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:54.252 05:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:54.252 05:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:54.252 05:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:54.252 05:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:58:54.252 { 00:58:54.252 "auth": { 00:58:54.252 "dhgroup": "ffdhe2048", 00:58:54.252 "digest": "sha256", 00:58:54.252 "state": "completed" 00:58:54.252 }, 00:58:54.252 "cntlid": 11, 00:58:54.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:58:54.252 "listen_address": { 00:58:54.252 "adrfam": "IPv4", 00:58:54.252 "traddr": "10.0.0.3", 00:58:54.252 "trsvcid": "4420", 00:58:54.252 "trtype": "TCP" 00:58:54.252 }, 00:58:54.252 "peer_address": { 00:58:54.252 "adrfam": "IPv4", 00:58:54.252 "traddr": "10.0.0.1", 00:58:54.252 "trsvcid": "34318", 00:58:54.252 "trtype": "TCP" 00:58:54.252 }, 00:58:54.252 "qid": 0, 00:58:54.252 "state": "enabled", 00:58:54.252 "thread": "nvmf_tgt_poll_group_000" 00:58:54.252 } 00:58:54.252 ]' 00:58:54.252 05:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:58:54.252 05:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:58:54.252 05:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:58:54.252 05:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:58:54.252 05:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:58:54.252 05:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:54.252 05:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:54.252 
05:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:54.512 05:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 00:58:54.512 05:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 00:58:55.080 05:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:55.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:55.080 05:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:58:55.080 05:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:55.080 05:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:55.080 05:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:55.080 05:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:58:55.080 05:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:58:55.080 05:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:58:55.339 05:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:58:55.339 05:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:58:55.339 05:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:58:55.340 05:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:58:55.340 05:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:58:55.340 05:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:55.340 05:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:55.340 05:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:55.340 05:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:55.340 05:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:58:55.340 05:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:55.340 05:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:55.340 05:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:58:55.599 00:58:55.599 05:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:58:55.599 05:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:58:55.599 05:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:55.858 05:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:55.858 05:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:55.858 05:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:55.858 05:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:55.858 05:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:55.858 05:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:58:55.858 { 00:58:55.858 "auth": { 00:58:55.858 "dhgroup": "ffdhe2048", 00:58:55.858 "digest": "sha256", 00:58:55.858 "state": "completed" 00:58:55.858 }, 00:58:55.858 "cntlid": 13, 00:58:55.858 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:58:55.858 "listen_address": { 00:58:55.858 "adrfam": "IPv4", 00:58:55.858 "traddr": "10.0.0.3", 00:58:55.858 "trsvcid": "4420", 00:58:55.858 "trtype": "TCP" 00:58:55.858 }, 00:58:55.858 "peer_address": { 00:58:55.858 "adrfam": "IPv4", 00:58:55.858 "traddr": "10.0.0.1", 00:58:55.858 "trsvcid": "34346", 00:58:55.858 "trtype": "TCP" 00:58:55.858 }, 00:58:55.858 "qid": 0, 00:58:55.858 "state": "enabled", 00:58:55.858 "thread": "nvmf_tgt_poll_group_000" 00:58:55.858 } 00:58:55.858 ]' 00:58:55.858 05:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:58:56.116 05:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:58:56.116 05:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:58:56.116 05:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:58:56.116 05:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:58:56.116 05:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:56.116 05:57:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:56.116 05:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:56.375 05:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 00:58:56.375 05:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 00:58:56.947 05:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:56.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:56.947 05:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:58:56.947 05:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:56.947 05:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:56.947 05:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:56.947 05:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:58:56.947 05:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:58:56.947 05:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:58:57.205 05:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:58:57.205 05:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:58:57.205 05:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:58:57.205 05:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:58:57.205 05:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:58:57.205 05:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:57.205 05:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key3 00:58:57.205 05:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:57.205 05:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
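Every digest/DH-group/key-index combination in this trace runs the same host/target round. A condensed sketch of the round in progress here (sha256, ffdhe2048, key3), using only commands, flags, paths and NQNs that appear in the surrounding log lines; target-side calls are assumed to go to the default RPC socket, as rpc_cmd does in the trace:

#!/usr/bin/env bash
# One auth round as exercised by this trace; flags, paths and NQNs are
# copied from the surrounding log lines.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock
hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2
subnqn=nqn.2024-03.io.spdk:cnode0

# pin the host to one digest/dhgroup combination
"$rpc" -s "$host_sock" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# allow the host on the subsystem with key3 (no controller key for index 3 in this run)
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3

# attach through the host-side RPC server and check the negotiated auth state
"$rpc" -s "$host_sock" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key3
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect "completed"

# (the trace also does an `nvme connect ... --dhchap-secret DHHC-1:03:...` /
# `nvme disconnect` pass at this point, omitted here)

# tear down before the next round
"$rpc" -s "$host_sock" bdev_nvme_detach_controller nvme0
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"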
00:58:57.205 05:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:57.205 05:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:58:57.205 05:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:58:57.205 05:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:58:57.772 00:58:57.772 05:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:58:57.772 05:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:57.772 05:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:58:57.772 05:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:57.772 05:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:57.772 05:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:57.772 05:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:57.772 05:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:57.772 05:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:58:57.772 { 00:58:57.772 "auth": { 00:58:57.772 "dhgroup": "ffdhe2048", 00:58:57.772 "digest": "sha256", 00:58:57.772 "state": "completed" 00:58:57.772 }, 00:58:57.772 "cntlid": 15, 00:58:57.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:58:57.772 "listen_address": { 00:58:57.772 "adrfam": "IPv4", 00:58:57.772 "traddr": "10.0.0.3", 00:58:57.772 "trsvcid": "4420", 00:58:57.772 "trtype": "TCP" 00:58:57.772 }, 00:58:57.772 "peer_address": { 00:58:57.772 "adrfam": "IPv4", 00:58:57.772 "traddr": "10.0.0.1", 00:58:57.772 "trsvcid": "34376", 00:58:57.772 "trtype": "TCP" 00:58:57.772 }, 00:58:57.772 "qid": 0, 00:58:57.772 "state": "enabled", 00:58:57.772 "thread": "nvmf_tgt_poll_group_000" 00:58:57.772 } 00:58:57.772 ]' 00:58:57.772 05:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:58:58.032 05:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:58:58.032 05:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:58:58.032 05:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:58:58.032 05:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:58:58.032 05:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:58.032 
05:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:58.032 05:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:58:58.290 05:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 00:58:58.290 05:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 00:58:58.857 05:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:58:58.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:58:58.857 05:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:58:58.857 05:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:58.857 05:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:58.857 05:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:58.857 05:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:58:58.857 05:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:58:58.857 05:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:58:58.857 05:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:58:59.116 05:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:58:59.116 05:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:58:59.116 05:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:58:59.116 05:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:58:59.116 05:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:58:59.116 05:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:58:59.116 05:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:59.116 05:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:59.116 05:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:58:59.116 05:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:59.116 05:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:59.116 05:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:59.116 05:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:58:59.374 00:58:59.374 05:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:58:59.374 05:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:58:59.374 05:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:58:59.941 05:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:58:59.941 05:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:58:59.941 05:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:58:59.941 05:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:58:59.941 05:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:58:59.941 05:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:58:59.941 { 00:58:59.941 "auth": { 00:58:59.941 "dhgroup": "ffdhe3072", 00:58:59.941 "digest": "sha256", 00:58:59.941 "state": "completed" 00:58:59.941 }, 00:58:59.941 "cntlid": 17, 00:58:59.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:58:59.941 "listen_address": { 00:58:59.941 "adrfam": "IPv4", 00:58:59.941 "traddr": "10.0.0.3", 00:58:59.941 "trsvcid": "4420", 00:58:59.941 "trtype": "TCP" 00:58:59.941 }, 00:58:59.941 "peer_address": { 00:58:59.941 "adrfam": "IPv4", 00:58:59.941 "traddr": "10.0.0.1", 00:58:59.941 "trsvcid": "34404", 00:58:59.941 "trtype": "TCP" 00:58:59.941 }, 00:58:59.941 "qid": 0, 00:58:59.941 "state": "enabled", 00:58:59.941 "thread": "nvmf_tgt_poll_group_000" 00:58:59.941 } 00:58:59.941 ]' 00:58:59.941 05:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:58:59.942 05:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:58:59.942 05:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:58:59.942 05:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:58:59.942 05:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:58:59.942 05:57:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:58:59.942 05:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:58:59.942 05:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:00.200 05:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 00:59:00.200 05:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 00:59:00.766 05:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:00.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:00.766 05:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:59:00.766 05:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:00.766 05:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:00.766 05:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:00.766 05:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:59:00.766 05:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:59:00.766 05:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:59:01.024 05:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:59:01.024 05:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:59:01.024 05:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:59:01.024 05:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:59:01.024 05:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:59:01.024 05:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:01.024 05:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:59:01.024 05:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:01.024 05:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:01.024 05:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:01.024 05:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:01.024 05:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:01.024 05:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:01.590 00:59:01.590 05:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:59:01.590 05:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:59:01.590 05:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:01.590 05:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:01.590 05:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:01.590 05:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:01.590 05:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:01.590 05:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:01.590 05:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:59:01.590 { 00:59:01.590 "auth": { 00:59:01.590 "dhgroup": "ffdhe3072", 00:59:01.590 "digest": "sha256", 00:59:01.590 "state": "completed" 00:59:01.590 }, 00:59:01.590 "cntlid": 19, 00:59:01.590 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:59:01.590 "listen_address": { 00:59:01.590 "adrfam": "IPv4", 00:59:01.590 "traddr": "10.0.0.3", 00:59:01.590 "trsvcid": "4420", 00:59:01.590 "trtype": "TCP" 00:59:01.590 }, 00:59:01.590 "peer_address": { 00:59:01.590 "adrfam": "IPv4", 00:59:01.590 "traddr": "10.0.0.1", 00:59:01.590 "trsvcid": "58770", 00:59:01.590 "trtype": "TCP" 00:59:01.590 }, 00:59:01.590 "qid": 0, 00:59:01.590 "state": "enabled", 00:59:01.590 "thread": "nvmf_tgt_poll_group_000" 00:59:01.590 } 00:59:01.590 ]' 00:59:01.590 05:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:59:01.848 05:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:59:01.848 05:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:59:01.848 05:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:59:01.848 05:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:59:01.848 05:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:01.848 05:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:01.848 05:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:02.105 05:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 00:59:02.105 05:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 00:59:02.672 05:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:02.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:02.672 05:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:59:02.672 05:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:02.672 05:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:02.672 05:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:02.672 05:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:59:02.672 05:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:59:02.672 05:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:59:02.930 05:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:59:02.930 05:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:59:02.930 05:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:59:02.930 05:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:59:02.930 05:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:59:02.930 05:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:02.930 05:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:02.930 05:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:02.930 05:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:02.930 05:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:02.930 05:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:02.930 05:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:02.930 05:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:03.495 00:59:03.495 05:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:59:03.495 05:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:59:03.495 05:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:03.769 05:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:03.769 05:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:03.769 05:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:03.769 05:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:03.769 05:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:03.769 05:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:59:03.769 { 00:59:03.769 "auth": { 00:59:03.769 "dhgroup": "ffdhe3072", 00:59:03.769 "digest": "sha256", 00:59:03.769 "state": "completed" 00:59:03.769 }, 00:59:03.769 "cntlid": 21, 00:59:03.769 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:59:03.769 "listen_address": { 00:59:03.769 "adrfam": "IPv4", 00:59:03.769 "traddr": "10.0.0.3", 00:59:03.769 "trsvcid": "4420", 00:59:03.769 "trtype": "TCP" 00:59:03.769 }, 00:59:03.769 "peer_address": { 00:59:03.769 "adrfam": "IPv4", 00:59:03.769 "traddr": "10.0.0.1", 00:59:03.769 "trsvcid": "58804", 00:59:03.769 "trtype": "TCP" 00:59:03.769 }, 00:59:03.769 "qid": 0, 00:59:03.769 "state": "enabled", 00:59:03.769 "thread": "nvmf_tgt_poll_group_000" 00:59:03.769 } 00:59:03.769 ]' 00:59:03.769 05:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:59:03.769 05:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:59:03.769 05:57:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:59:03.769 05:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:59:03.769 05:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:59:03.769 05:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:03.769 05:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:03.769 05:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:04.027 05:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 00:59:04.027 05:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 00:59:04.959 05:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:04.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:04.959 05:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:59:04.959 05:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:04.959 05:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:04.959 05:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:04.959 05:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:59:04.959 05:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:59:04.959 05:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:59:04.959 05:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:59:04.959 05:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:59:04.959 05:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:59:04.959 05:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:59:04.959 05:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:59:04.959 05:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:04.959 05:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key3 00:59:04.959 05:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:04.959 05:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:05.217 05:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:05.217 05:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:59:05.217 05:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:59:05.217 05:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:59:05.476 00:59:05.476 05:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:59:05.476 05:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:59:05.476 05:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:05.735 05:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:05.735 05:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:05.735 05:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:05.735 05:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:05.735 05:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:05.735 05:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:59:05.735 { 00:59:05.735 "auth": { 00:59:05.735 "dhgroup": "ffdhe3072", 00:59:05.735 "digest": "sha256", 00:59:05.735 "state": "completed" 00:59:05.735 }, 00:59:05.735 "cntlid": 23, 00:59:05.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:59:05.735 "listen_address": { 00:59:05.735 "adrfam": "IPv4", 00:59:05.735 "traddr": "10.0.0.3", 00:59:05.735 "trsvcid": "4420", 00:59:05.735 "trtype": "TCP" 00:59:05.735 }, 00:59:05.735 "peer_address": { 00:59:05.735 "adrfam": "IPv4", 00:59:05.735 "traddr": "10.0.0.1", 00:59:05.735 "trsvcid": "58842", 00:59:05.735 "trtype": "TCP" 00:59:05.735 }, 00:59:05.735 "qid": 0, 00:59:05.735 "state": "enabled", 00:59:05.735 "thread": "nvmf_tgt_poll_group_000" 00:59:05.735 } 00:59:05.735 ]' 00:59:05.735 05:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:59:05.735 05:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:59:05.735 05:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:59:05.735 05:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:59:05.735 05:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:59:05.994 05:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:05.994 05:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:05.994 05:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:06.253 05:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 00:59:06.253 05:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 00:59:06.824 05:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:06.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:06.824 05:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:59:06.824 05:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:06.824 05:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:06.824 05:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:06.824 05:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:59:06.824 05:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:59:06.824 05:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:59:06.824 05:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:59:07.082 05:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:59:07.082 05:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:59:07.082 05:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:59:07.082 05:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:59:07.082 05:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:59:07.082 05:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:07.082 05:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:07.082 05:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:07.082 05:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:07.082 05:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:07.082 05:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:07.082 05:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:07.082 05:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:07.649 00:59:07.649 05:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:59:07.649 05:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:59:07.649 05:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:07.908 05:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:07.908 05:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:07.908 05:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:07.908 05:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:07.908 05:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:07.908 05:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:59:07.908 { 00:59:07.908 "auth": { 00:59:07.908 "dhgroup": "ffdhe4096", 00:59:07.908 "digest": "sha256", 00:59:07.908 "state": "completed" 00:59:07.908 }, 00:59:07.908 "cntlid": 25, 00:59:07.908 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:59:07.908 "listen_address": { 00:59:07.908 "adrfam": "IPv4", 00:59:07.908 "traddr": "10.0.0.3", 00:59:07.908 "trsvcid": "4420", 00:59:07.908 "trtype": "TCP" 00:59:07.908 }, 00:59:07.908 "peer_address": { 00:59:07.908 "adrfam": "IPv4", 00:59:07.908 "traddr": "10.0.0.1", 00:59:07.908 "trsvcid": "58870", 00:59:07.908 "trtype": "TCP" 00:59:07.908 }, 00:59:07.908 "qid": 0, 00:59:07.908 "state": "enabled", 00:59:07.908 "thread": "nvmf_tgt_poll_group_000" 00:59:07.908 } 00:59:07.908 ]' 00:59:07.908 05:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:59:07.908 05:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:59:07.908 05:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:59:07.908 05:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:59:07.908 05:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:59:07.908 05:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:07.908 05:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:07.908 05:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:08.167 05:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 00:59:08.167 05:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 00:59:09.105 05:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:09.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:09.105 05:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:59:09.105 05:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:09.105 05:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:09.105 05:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:09.105 05:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:59:09.105 05:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:59:09.105 05:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:59:09.105 05:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:59:09.105 05:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:59:09.105 05:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:59:09.105 05:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:59:09.105 05:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:59:09.105 05:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:09.105 05:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:09.105 05:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:09.105 05:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:09.105 05:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:09.105 05:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:09.105 05:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:09.105 05:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:09.673 00:59:09.673 05:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:59:09.673 05:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:59:09.673 05:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:09.932 05:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:09.932 05:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:09.932 05:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:09.932 05:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:09.932 05:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:09.932 05:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:59:09.932 { 00:59:09.932 "auth": { 00:59:09.932 "dhgroup": "ffdhe4096", 00:59:09.933 "digest": "sha256", 00:59:09.933 "state": "completed" 00:59:09.933 }, 00:59:09.933 "cntlid": 27, 00:59:09.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:59:09.933 "listen_address": { 00:59:09.933 "adrfam": "IPv4", 00:59:09.933 "traddr": "10.0.0.3", 00:59:09.933 "trsvcid": "4420", 00:59:09.933 "trtype": "TCP" 00:59:09.933 }, 00:59:09.933 "peer_address": { 00:59:09.933 "adrfam": "IPv4", 00:59:09.933 "traddr": "10.0.0.1", 00:59:09.933 "trsvcid": "58900", 00:59:09.933 "trtype": "TCP" 00:59:09.933 }, 00:59:09.933 "qid": 0, 
00:59:09.933 "state": "enabled", 00:59:09.933 "thread": "nvmf_tgt_poll_group_000" 00:59:09.933 } 00:59:09.933 ]' 00:59:09.933 05:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:59:09.933 05:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:59:09.933 05:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:59:09.933 05:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:59:09.933 05:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:59:10.192 05:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:10.192 05:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:10.192 05:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:10.450 05:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 00:59:10.451 05:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 00:59:11.016 05:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:11.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:11.016 05:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:59:11.016 05:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:11.016 05:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:11.016 05:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:11.016 05:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:59:11.016 05:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:59:11.016 05:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:59:11.273 05:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:59:11.273 05:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:59:11.273 05:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha256 00:59:11.273 05:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:59:11.273 05:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:59:11.273 05:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:11.273 05:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:11.273 05:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:11.273 05:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:11.273 05:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:11.273 05:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:11.274 05:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:11.274 05:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:11.839 00:59:11.839 05:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:59:11.839 05:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:11.839 05:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:59:11.839 05:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:11.839 05:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:11.839 05:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:11.839 05:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:11.839 05:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:11.839 05:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:59:11.839 { 00:59:11.839 "auth": { 00:59:11.839 "dhgroup": "ffdhe4096", 00:59:11.839 "digest": "sha256", 00:59:11.839 "state": "completed" 00:59:11.839 }, 00:59:11.839 "cntlid": 29, 00:59:11.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:59:11.839 "listen_address": { 00:59:11.839 "adrfam": "IPv4", 00:59:11.839 "traddr": "10.0.0.3", 00:59:11.839 "trsvcid": "4420", 00:59:11.839 "trtype": "TCP" 00:59:11.839 }, 00:59:11.839 "peer_address": { 00:59:11.839 "adrfam": "IPv4", 00:59:11.839 "traddr": "10.0.0.1", 
00:59:11.839 "trsvcid": "39314", 00:59:11.839 "trtype": "TCP" 00:59:11.839 }, 00:59:11.839 "qid": 0, 00:59:11.839 "state": "enabled", 00:59:11.839 "thread": "nvmf_tgt_poll_group_000" 00:59:11.839 } 00:59:11.839 ]' 00:59:11.839 05:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:59:12.097 05:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:59:12.097 05:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:59:12.097 05:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:59:12.097 05:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:59:12.097 05:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:12.097 05:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:12.097 05:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:12.354 05:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 00:59:12.354 05:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 00:59:12.919 05:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:12.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:12.919 05:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:59:12.919 05:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:12.919 05:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:12.919 05:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:12.919 05:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:59:12.919 05:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:59:12.919 05:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:59:13.483 05:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:59:13.483 05:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # 
local digest dhgroup key ckey qpairs 00:59:13.483 05:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:59:13.483 05:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:59:13.483 05:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:59:13.483 05:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:13.483 05:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key3 00:59:13.483 05:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:13.483 05:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:13.483 05:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:13.483 05:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:59:13.483 05:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:59:13.483 05:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:59:13.741 00:59:13.741 05:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:59:13.741 05:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:13.741 05:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:59:13.999 05:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:13.999 05:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:13.999 05:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:13.999 05:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:13.999 05:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:13.999 05:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:59:13.999 { 00:59:13.999 "auth": { 00:59:13.999 "dhgroup": "ffdhe4096", 00:59:13.999 "digest": "sha256", 00:59:13.999 "state": "completed" 00:59:13.999 }, 00:59:13.999 "cntlid": 31, 00:59:13.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:59:13.999 "listen_address": { 00:59:13.999 "adrfam": "IPv4", 00:59:13.999 "traddr": "10.0.0.3", 00:59:13.999 "trsvcid": "4420", 00:59:13.999 "trtype": "TCP" 00:59:13.999 }, 00:59:13.999 "peer_address": { 00:59:13.999 "adrfam": "IPv4", 00:59:13.999 "traddr": 
"10.0.0.1", 00:59:13.999 "trsvcid": "39340", 00:59:13.999 "trtype": "TCP" 00:59:13.999 }, 00:59:13.999 "qid": 0, 00:59:13.999 "state": "enabled", 00:59:13.999 "thread": "nvmf_tgt_poll_group_000" 00:59:13.999 } 00:59:13.999 ]' 00:59:13.999 05:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:59:13.999 05:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:59:13.999 05:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:59:13.999 05:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:59:13.999 05:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:59:13.999 05:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:13.999 05:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:13.999 05:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:14.257 05:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 00:59:14.257 05:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 00:59:14.823 05:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:14.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:14.823 05:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:59:14.823 05:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:14.823 05:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:14.823 05:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:14.823 05:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:59:14.823 05:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:59:14.823 05:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:59:14.823 05:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:59:15.082 05:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:59:15.082 05:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:59:15.082 05:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:59:15.082 05:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:59:15.082 05:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:59:15.082 05:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:15.082 05:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:15.082 05:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:15.082 05:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:15.082 05:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:15.082 05:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:15.082 05:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:15.082 05:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:15.649 00:59:15.649 05:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:59:15.649 05:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:59:15.649 05:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:15.907 05:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:15.907 05:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:15.907 05:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:15.907 05:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:15.907 05:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:15.907 05:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:59:15.907 { 00:59:15.907 "auth": { 00:59:15.907 "dhgroup": "ffdhe6144", 00:59:15.907 "digest": "sha256", 00:59:15.907 "state": "completed" 00:59:15.907 }, 00:59:15.907 "cntlid": 33, 00:59:15.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:59:15.907 "listen_address": { 00:59:15.907 "adrfam": "IPv4", 00:59:15.907 "traddr": "10.0.0.3", 00:59:15.907 "trsvcid": "4420", 00:59:15.907 
"trtype": "TCP" 00:59:15.907 }, 00:59:15.907 "peer_address": { 00:59:15.907 "adrfam": "IPv4", 00:59:15.907 "traddr": "10.0.0.1", 00:59:15.907 "trsvcid": "39352", 00:59:15.907 "trtype": "TCP" 00:59:15.907 }, 00:59:15.907 "qid": 0, 00:59:15.907 "state": "enabled", 00:59:15.907 "thread": "nvmf_tgt_poll_group_000" 00:59:15.907 } 00:59:15.907 ]' 00:59:15.907 05:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:59:15.907 05:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:59:15.907 05:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:59:15.907 05:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:59:15.907 05:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:59:15.907 05:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:15.907 05:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:15.907 05:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:16.477 05:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 00:59:16.477 05:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 00:59:17.045 05:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:17.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:17.045 05:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:59:17.045 05:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:17.045 05:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:17.045 05:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:17.045 05:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:59:17.045 05:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:59:17.045 05:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:59:17.304 05:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:59:17.304 05:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:59:17.304 05:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:59:17.304 05:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:59:17.304 05:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:59:17.304 05:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:17.305 05:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:17.305 05:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:17.305 05:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:17.305 05:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:17.305 05:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:17.305 05:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:17.305 05:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:17.873 00:59:17.873 05:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:59:17.873 05:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:59:17.873 05:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:18.132 05:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:18.132 05:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:18.132 05:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:18.132 05:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:18.132 05:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:18.132 05:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:59:18.132 { 00:59:18.132 "auth": { 00:59:18.132 "dhgroup": "ffdhe6144", 00:59:18.132 "digest": "sha256", 00:59:18.132 "state": "completed" 00:59:18.132 }, 00:59:18.132 "cntlid": 35, 00:59:18.132 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:59:18.132 "listen_address": { 00:59:18.132 "adrfam": "IPv4", 00:59:18.132 "traddr": "10.0.0.3", 00:59:18.132 "trsvcid": "4420", 00:59:18.132 "trtype": "TCP" 00:59:18.132 }, 00:59:18.132 "peer_address": { 00:59:18.132 "adrfam": "IPv4", 00:59:18.132 "traddr": "10.0.0.1", 00:59:18.132 "trsvcid": "39384", 00:59:18.132 "trtype": "TCP" 00:59:18.132 }, 00:59:18.132 "qid": 0, 00:59:18.132 "state": "enabled", 00:59:18.132 "thread": "nvmf_tgt_poll_group_000" 00:59:18.132 } 00:59:18.132 ]' 00:59:18.132 05:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:59:18.132 05:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:59:18.132 05:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:59:18.132 05:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:59:18.132 05:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:59:18.132 05:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:18.132 05:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:18.132 05:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:18.391 05:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 00:59:18.391 05:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 00:59:18.959 05:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:18.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:18.959 05:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:59:18.959 05:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:18.959 05:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:18.959 05:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:18.959 05:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:59:18.959 05:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:59:18.959 05:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:59:19.219 05:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:59:19.219 05:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:59:19.219 05:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:59:19.219 05:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:59:19.219 05:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:59:19.219 05:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:19.219 05:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:19.219 05:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:19.219 05:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:19.219 05:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:19.219 05:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:19.219 05:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:19.219 05:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:19.788 00:59:19.788 05:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:59:19.788 05:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:59:19.788 05:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:20.047 05:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:20.047 05:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:20.047 05:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:20.047 05:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:20.047 05:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:20.047 05:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:59:20.047 { 00:59:20.047 "auth": { 00:59:20.047 "dhgroup": "ffdhe6144", 
00:59:20.047 "digest": "sha256", 00:59:20.047 "state": "completed" 00:59:20.047 }, 00:59:20.047 "cntlid": 37, 00:59:20.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:59:20.047 "listen_address": { 00:59:20.047 "adrfam": "IPv4", 00:59:20.047 "traddr": "10.0.0.3", 00:59:20.047 "trsvcid": "4420", 00:59:20.047 "trtype": "TCP" 00:59:20.047 }, 00:59:20.047 "peer_address": { 00:59:20.047 "adrfam": "IPv4", 00:59:20.047 "traddr": "10.0.0.1", 00:59:20.047 "trsvcid": "39394", 00:59:20.047 "trtype": "TCP" 00:59:20.047 }, 00:59:20.047 "qid": 0, 00:59:20.047 "state": "enabled", 00:59:20.047 "thread": "nvmf_tgt_poll_group_000" 00:59:20.047 } 00:59:20.047 ]' 00:59:20.047 05:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:59:20.047 05:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:59:20.047 05:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:59:20.047 05:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:59:20.047 05:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:59:20.047 05:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:20.047 05:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:20.047 05:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:20.306 05:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 00:59:20.306 05:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 00:59:20.874 05:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:20.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:20.874 05:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:59:20.874 05:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:20.874 05:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:20.874 05:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:20.874 05:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:59:20.874 05:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:59:20.874 05:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:59:21.149 05:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:59:21.149 05:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:59:21.149 05:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:59:21.149 05:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:59:21.149 05:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:59:21.149 05:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:21.149 05:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key3 00:59:21.149 05:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:21.149 05:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:21.149 05:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:21.149 05:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:59:21.149 05:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:59:21.149 05:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:59:21.715 00:59:21.715 05:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:59:21.715 05:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:21.715 05:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:59:21.973 05:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:21.973 05:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:21.973 05:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:21.973 05:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:21.973 05:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:21.973 05:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:59:21.973 { 00:59:21.973 "auth": { 00:59:21.973 "dhgroup": 
"ffdhe6144", 00:59:21.973 "digest": "sha256", 00:59:21.973 "state": "completed" 00:59:21.973 }, 00:59:21.973 "cntlid": 39, 00:59:21.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:59:21.973 "listen_address": { 00:59:21.973 "adrfam": "IPv4", 00:59:21.973 "traddr": "10.0.0.3", 00:59:21.973 "trsvcid": "4420", 00:59:21.973 "trtype": "TCP" 00:59:21.973 }, 00:59:21.973 "peer_address": { 00:59:21.973 "adrfam": "IPv4", 00:59:21.973 "traddr": "10.0.0.1", 00:59:21.973 "trsvcid": "34158", 00:59:21.973 "trtype": "TCP" 00:59:21.973 }, 00:59:21.973 "qid": 0, 00:59:21.973 "state": "enabled", 00:59:21.973 "thread": "nvmf_tgt_poll_group_000" 00:59:21.973 } 00:59:21.973 ]' 00:59:21.973 05:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:59:21.973 05:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:59:21.973 05:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:59:21.973 05:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:59:21.973 05:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:59:22.231 05:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:22.231 05:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:22.231 05:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:22.491 05:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 00:59:22.491 05:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 00:59:23.085 05:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:23.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:23.085 05:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:59:23.085 05:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:23.085 05:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:23.085 05:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:23.085 05:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:59:23.085 05:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:59:23.085 05:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:59:23.085 05:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:59:23.360 05:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:59:23.360 05:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:59:23.360 05:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:59:23.360 05:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:59:23.360 05:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:59:23.361 05:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:23.361 05:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:23.361 05:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:23.361 05:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:23.361 05:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:23.361 05:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:23.361 05:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:23.361 05:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:23.932 00:59:23.932 05:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:59:23.932 05:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:59:23.932 05:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:24.191 05:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:24.191 05:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:24.191 05:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:24.191 05:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:24.191 05:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:24.191 05:58:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:59:24.191 { 00:59:24.191 "auth": { 00:59:24.191 "dhgroup": "ffdhe8192", 00:59:24.191 "digest": "sha256", 00:59:24.191 "state": "completed" 00:59:24.191 }, 00:59:24.191 "cntlid": 41, 00:59:24.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:59:24.191 "listen_address": { 00:59:24.191 "adrfam": "IPv4", 00:59:24.191 "traddr": "10.0.0.3", 00:59:24.191 "trsvcid": "4420", 00:59:24.191 "trtype": "TCP" 00:59:24.191 }, 00:59:24.191 "peer_address": { 00:59:24.191 "adrfam": "IPv4", 00:59:24.191 "traddr": "10.0.0.1", 00:59:24.191 "trsvcid": "34192", 00:59:24.191 "trtype": "TCP" 00:59:24.191 }, 00:59:24.191 "qid": 0, 00:59:24.191 "state": "enabled", 00:59:24.191 "thread": "nvmf_tgt_poll_group_000" 00:59:24.191 } 00:59:24.191 ]' 00:59:24.191 05:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:59:24.191 05:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:59:24.191 05:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:59:24.191 05:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:59:24.191 05:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:59:24.449 05:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:24.449 05:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:24.449 05:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:24.707 05:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 00:59:24.707 05:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 00:59:25.272 05:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:25.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:25.272 05:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:59:25.272 05:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:25.272 05:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:25.272 05:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:25.272 05:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:59:25.272 05:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:59:25.272 05:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:59:25.530 05:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:59:25.530 05:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:59:25.530 05:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:59:25.530 05:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:59:25.530 05:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:59:25.530 05:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:25.530 05:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:25.530 05:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:25.530 05:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:25.530 05:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:25.530 05:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:25.530 05:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:25.530 05:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:26.098 00:59:26.098 05:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:59:26.098 05:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:59:26.098 05:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:26.357 05:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:26.357 05:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:26.357 05:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:26.357 05:58:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:26.357 05:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:26.357 05:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:59:26.357 { 00:59:26.357 "auth": { 00:59:26.357 "dhgroup": "ffdhe8192", 00:59:26.357 "digest": "sha256", 00:59:26.357 "state": "completed" 00:59:26.357 }, 00:59:26.357 "cntlid": 43, 00:59:26.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:59:26.357 "listen_address": { 00:59:26.357 "adrfam": "IPv4", 00:59:26.357 "traddr": "10.0.0.3", 00:59:26.357 "trsvcid": "4420", 00:59:26.357 "trtype": "TCP" 00:59:26.357 }, 00:59:26.357 "peer_address": { 00:59:26.357 "adrfam": "IPv4", 00:59:26.357 "traddr": "10.0.0.1", 00:59:26.357 "trsvcid": "34214", 00:59:26.357 "trtype": "TCP" 00:59:26.357 }, 00:59:26.357 "qid": 0, 00:59:26.357 "state": "enabled", 00:59:26.357 "thread": "nvmf_tgt_poll_group_000" 00:59:26.357 } 00:59:26.357 ]' 00:59:26.357 05:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:59:26.357 05:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:59:26.357 05:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:59:26.616 05:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:59:26.616 05:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:59:26.616 05:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:26.616 05:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:26.616 05:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:26.875 05:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 00:59:26.875 05:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 00:59:27.443 05:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:27.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:27.443 05:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:59:27.443 05:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:27.443 05:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:59:27.443 05:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:27.443 05:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:59:27.443 05:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:59:27.443 05:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:59:27.703 05:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:59:27.703 05:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:59:27.703 05:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:59:27.703 05:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:59:27.703 05:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:59:27.703 05:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:27.703 05:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:27.703 05:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:27.704 05:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:27.704 05:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:27.704 05:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:27.704 05:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:27.704 05:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:28.272 00:59:28.272 05:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:59:28.272 05:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:59:28.272 05:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:28.533 05:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:28.533 05:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:28.533 05:58:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:28.533 05:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:28.533 05:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:28.533 05:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:59:28.533 { 00:59:28.533 "auth": { 00:59:28.533 "dhgroup": "ffdhe8192", 00:59:28.533 "digest": "sha256", 00:59:28.533 "state": "completed" 00:59:28.533 }, 00:59:28.533 "cntlid": 45, 00:59:28.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:59:28.533 "listen_address": { 00:59:28.533 "adrfam": "IPv4", 00:59:28.533 "traddr": "10.0.0.3", 00:59:28.533 "trsvcid": "4420", 00:59:28.533 "trtype": "TCP" 00:59:28.533 }, 00:59:28.533 "peer_address": { 00:59:28.533 "adrfam": "IPv4", 00:59:28.533 "traddr": "10.0.0.1", 00:59:28.533 "trsvcid": "34240", 00:59:28.533 "trtype": "TCP" 00:59:28.533 }, 00:59:28.533 "qid": 0, 00:59:28.533 "state": "enabled", 00:59:28.533 "thread": "nvmf_tgt_poll_group_000" 00:59:28.533 } 00:59:28.533 ]' 00:59:28.533 05:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:59:28.792 05:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:59:28.792 05:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:59:28.792 05:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:59:28.792 05:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:59:28.792 05:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:28.792 05:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:28.792 05:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:29.052 05:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 00:59:29.052 05:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 00:59:29.621 05:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:29.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:29.621 05:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:59:29.621 05:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:59:29.621 05:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:29.879 05:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:29.879 05:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:59:29.879 05:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:59:29.879 05:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:59:30.138 05:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:59:30.138 05:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:59:30.138 05:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:59:30.138 05:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:59:30.138 05:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:59:30.138 05:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:30.138 05:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key3 00:59:30.138 05:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:30.138 05:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:30.138 05:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:30.138 05:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:59:30.138 05:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:59:30.138 05:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:59:30.706 00:59:30.706 05:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:59:30.706 05:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:30.706 05:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:59:30.964 05:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:30.964 05:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:30.964 
05:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:30.964 05:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:30.964 05:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:30.964 05:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:59:30.964 { 00:59:30.964 "auth": { 00:59:30.964 "dhgroup": "ffdhe8192", 00:59:30.964 "digest": "sha256", 00:59:30.964 "state": "completed" 00:59:30.964 }, 00:59:30.964 "cntlid": 47, 00:59:30.964 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:59:30.964 "listen_address": { 00:59:30.964 "adrfam": "IPv4", 00:59:30.964 "traddr": "10.0.0.3", 00:59:30.964 "trsvcid": "4420", 00:59:30.964 "trtype": "TCP" 00:59:30.964 }, 00:59:30.964 "peer_address": { 00:59:30.964 "adrfam": "IPv4", 00:59:30.964 "traddr": "10.0.0.1", 00:59:30.964 "trsvcid": "34670", 00:59:30.964 "trtype": "TCP" 00:59:30.964 }, 00:59:30.964 "qid": 0, 00:59:30.964 "state": "enabled", 00:59:30.964 "thread": "nvmf_tgt_poll_group_000" 00:59:30.964 } 00:59:30.964 ]' 00:59:30.964 05:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:59:30.964 05:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:59:30.964 05:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:59:30.964 05:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:59:30.964 05:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:59:30.964 05:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:30.964 05:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:30.964 05:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:31.222 05:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 00:59:31.222 05:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 00:59:31.789 05:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:31.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:31.789 05:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:59:31.789 05:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:31.789 05:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:59:31.789 05:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:31.789 05:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:59:31.789 05:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:59:31.789 05:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:59:31.789 05:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:59:31.789 05:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:59:32.048 05:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:59:32.048 05:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:59:32.048 05:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:59:32.048 05:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:59:32.048 05:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:59:32.048 05:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:32.049 05:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:32.049 05:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:32.049 05:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:32.049 05:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:32.049 05:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:32.049 05:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:32.049 05:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:32.617 00:59:32.617 05:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:59:32.617 05:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:32.617 05:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:59:32.617 05:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:32.617 05:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:32.617 05:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:32.617 05:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:32.617 05:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:32.617 05:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:59:32.617 { 00:59:32.617 "auth": { 00:59:32.617 "dhgroup": "null", 00:59:32.617 "digest": "sha384", 00:59:32.617 "state": "completed" 00:59:32.617 }, 00:59:32.617 "cntlid": 49, 00:59:32.617 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:59:32.617 "listen_address": { 00:59:32.617 "adrfam": "IPv4", 00:59:32.617 "traddr": "10.0.0.3", 00:59:32.617 "trsvcid": "4420", 00:59:32.617 "trtype": "TCP" 00:59:32.617 }, 00:59:32.617 "peer_address": { 00:59:32.617 "adrfam": "IPv4", 00:59:32.617 "traddr": "10.0.0.1", 00:59:32.617 "trsvcid": "34690", 00:59:32.617 "trtype": "TCP" 00:59:32.617 }, 00:59:32.617 "qid": 0, 00:59:32.617 "state": "enabled", 00:59:32.617 "thread": "nvmf_tgt_poll_group_000" 00:59:32.617 } 00:59:32.617 ]' 00:59:32.617 05:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:59:32.876 05:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:59:32.876 05:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:59:32.876 05:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:59:32.876 05:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:59:32.876 05:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:32.876 05:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:32.876 05:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:33.135 05:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 00:59:33.135 05:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 00:59:33.701 05:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:33.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:33.701 05:58:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:59:33.701 05:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:33.701 05:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:33.701 05:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:33.701 05:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:59:33.701 05:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:59:33.701 05:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:59:33.960 05:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:59:33.960 05:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:59:33.960 05:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:59:33.960 05:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:59:33.960 05:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:59:33.960 05:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:33.960 05:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:33.960 05:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:33.960 05:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:33.960 05:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:33.960 05:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:33.960 05:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:33.960 05:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:34.525 00:59:34.525 05:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:59:34.525 05:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:59:34.525 05:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:34.784 05:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:34.784 05:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:34.784 05:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:34.784 05:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:34.784 05:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:34.784 05:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:59:34.784 { 00:59:34.784 "auth": { 00:59:34.784 "dhgroup": "null", 00:59:34.784 "digest": "sha384", 00:59:34.784 "state": "completed" 00:59:34.784 }, 00:59:34.784 "cntlid": 51, 00:59:34.784 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:59:34.784 "listen_address": { 00:59:34.784 "adrfam": "IPv4", 00:59:34.784 "traddr": "10.0.0.3", 00:59:34.784 "trsvcid": "4420", 00:59:34.784 "trtype": "TCP" 00:59:34.784 }, 00:59:34.784 "peer_address": { 00:59:34.784 "adrfam": "IPv4", 00:59:34.784 "traddr": "10.0.0.1", 00:59:34.784 "trsvcid": "34710", 00:59:34.784 "trtype": "TCP" 00:59:34.784 }, 00:59:34.784 "qid": 0, 00:59:34.784 "state": "enabled", 00:59:34.784 "thread": "nvmf_tgt_poll_group_000" 00:59:34.784 } 00:59:34.784 ]' 00:59:34.784 05:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:59:34.784 05:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:59:34.784 05:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:59:34.784 05:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:59:34.784 05:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:59:34.784 05:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:34.784 05:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:34.784 05:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:35.099 05:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 00:59:35.099 05:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 00:59:35.664 05:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:35.664 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:35.664 05:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:59:35.664 05:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:35.664 05:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:35.664 05:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:35.664 05:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:59:35.664 05:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:59:35.664 05:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:59:35.922 05:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:59:35.922 05:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:59:35.922 05:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:59:35.922 05:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:59:35.922 05:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:59:35.922 05:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:35.922 05:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:35.922 05:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:35.922 05:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:35.922 05:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:35.923 05:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:35.923 05:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:35.923 05:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:36.489 00:59:36.489 05:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:59:36.489 05:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:59:36.489 05:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:36.746 05:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:36.746 05:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:36.746 05:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:36.746 05:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:36.746 05:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:36.746 05:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:59:36.746 { 00:59:36.746 "auth": { 00:59:36.746 "dhgroup": "null", 00:59:36.746 "digest": "sha384", 00:59:36.746 "state": "completed" 00:59:36.746 }, 00:59:36.746 "cntlid": 53, 00:59:36.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:59:36.746 "listen_address": { 00:59:36.746 "adrfam": "IPv4", 00:59:36.746 "traddr": "10.0.0.3", 00:59:36.746 "trsvcid": "4420", 00:59:36.746 "trtype": "TCP" 00:59:36.746 }, 00:59:36.746 "peer_address": { 00:59:36.746 "adrfam": "IPv4", 00:59:36.746 "traddr": "10.0.0.1", 00:59:36.746 "trsvcid": "34748", 00:59:36.746 "trtype": "TCP" 00:59:36.746 }, 00:59:36.746 "qid": 0, 00:59:36.746 "state": "enabled", 00:59:36.746 "thread": "nvmf_tgt_poll_group_000" 00:59:36.746 } 00:59:36.746 ]' 00:59:36.746 05:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:59:36.746 05:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:59:36.746 05:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:59:36.746 05:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:59:36.746 05:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:59:36.746 05:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:36.746 05:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:36.746 05:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:37.312 05:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 00:59:37.312 05:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 00:59:37.878 05:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:37.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:37.878 05:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:59:37.878 05:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:37.878 05:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:37.878 05:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:37.878 05:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:59:37.878 05:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:59:37.878 05:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:59:38.137 05:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:59:38.137 05:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:59:38.137 05:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:59:38.137 05:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:59:38.137 05:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:59:38.137 05:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:38.137 05:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key3 00:59:38.137 05:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:38.137 05:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:38.137 05:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:38.137 05:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:59:38.137 05:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:59:38.137 05:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:59:38.395 00:59:38.395 05:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:59:38.395 05:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
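Each iteration validates authentication the same way: the target's qpairs for the subsystem are dumped and the auth object is checked field by field with jq, exactly as in the trace. A condensed form of that check is below; the expected digest/DH-group values are per-iteration parameters (sha384/null here, matching the surrounding trace), and the default target RPC socket is assumed.

  # Fetch the qpairs once and assert on the first (here the only) qpair's auth fields.
  qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]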
00:59:38.395 05:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:38.654 05:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:38.654 05:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:38.654 05:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:38.654 05:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:38.654 05:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:38.654 05:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:59:38.654 { 00:59:38.654 "auth": { 00:59:38.654 "dhgroup": "null", 00:59:38.654 "digest": "sha384", 00:59:38.654 "state": "completed" 00:59:38.654 }, 00:59:38.654 "cntlid": 55, 00:59:38.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:59:38.654 "listen_address": { 00:59:38.654 "adrfam": "IPv4", 00:59:38.654 "traddr": "10.0.0.3", 00:59:38.654 "trsvcid": "4420", 00:59:38.654 "trtype": "TCP" 00:59:38.654 }, 00:59:38.654 "peer_address": { 00:59:38.654 "adrfam": "IPv4", 00:59:38.654 "traddr": "10.0.0.1", 00:59:38.654 "trsvcid": "34792", 00:59:38.654 "trtype": "TCP" 00:59:38.654 }, 00:59:38.654 "qid": 0, 00:59:38.654 "state": "enabled", 00:59:38.654 "thread": "nvmf_tgt_poll_group_000" 00:59:38.654 } 00:59:38.654 ]' 00:59:38.654 05:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:59:38.913 05:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:59:38.913 05:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:59:38.913 05:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:59:38.913 05:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:59:38.913 05:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:38.913 05:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:38.913 05:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:39.172 05:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 00:59:39.172 05:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 00:59:39.739 05:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:39.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
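Besides the SPDK host application, every iteration also exercises the Linux kernel initiator: nvme-cli connects to the same subsystem with the DH-CHAP secrets passed on the command line, and the controller is disconnected again once the connection succeeds. A stripped-down version of that step is below; the flags mirror the trace, the long DHHC-1 configured keys are abbreviated to placeholders rather than repeated in full, and --dhchap-ctrl-secret is only passed when the iteration's key has a controller counterpart (some iterations in the trace omit it).

  # Kernel-initiator connect/disconnect against the same subsystem (nvme-cli).
  # -i 1 limits the connection to one I/O queue and -l 0 sets the controller loss
  # timeout to zero, matching the options used by the harness above.
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 \
      --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 \
      --dhchap-secret 'DHHC-1:03:<host key, placeholder>' \
      --dhchap-ctrl-secret 'DHHC-1:03:<controller key, placeholder>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0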
00:59:39.739 05:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:59:39.739 05:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:39.739 05:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:39.739 05:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:39.739 05:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:59:39.739 05:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:59:39.739 05:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:59:39.739 05:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:59:39.998 05:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:59:39.998 05:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:59:39.998 05:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:59:39.998 05:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:59:39.998 05:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:59:39.998 05:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:39.998 05:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:39.998 05:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:39.998 05:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:39.998 05:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:39.998 05:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:39.998 05:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:39.998 05:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:40.566 00:59:40.566 05:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:59:40.566 
05:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:59:40.566 05:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:40.823 05:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:40.823 05:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:40.823 05:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:40.823 05:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:40.823 05:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:40.823 05:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:59:40.823 { 00:59:40.823 "auth": { 00:59:40.823 "dhgroup": "ffdhe2048", 00:59:40.823 "digest": "sha384", 00:59:40.823 "state": "completed" 00:59:40.823 }, 00:59:40.823 "cntlid": 57, 00:59:40.823 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:59:40.823 "listen_address": { 00:59:40.823 "adrfam": "IPv4", 00:59:40.823 "traddr": "10.0.0.3", 00:59:40.823 "trsvcid": "4420", 00:59:40.823 "trtype": "TCP" 00:59:40.823 }, 00:59:40.823 "peer_address": { 00:59:40.823 "adrfam": "IPv4", 00:59:40.823 "traddr": "10.0.0.1", 00:59:40.823 "trsvcid": "53620", 00:59:40.823 "trtype": "TCP" 00:59:40.823 }, 00:59:40.823 "qid": 0, 00:59:40.823 "state": "enabled", 00:59:40.823 "thread": "nvmf_tgt_poll_group_000" 00:59:40.823 } 00:59:40.823 ]' 00:59:40.823 05:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:59:40.823 05:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:59:40.823 05:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:59:40.823 05:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:59:40.823 05:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:59:40.823 05:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:40.823 05:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:40.823 05:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:41.080 05:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 00:59:41.080 05:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: 
--dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 00:59:42.015 05:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:42.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:42.015 05:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:59:42.015 05:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:42.015 05:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:42.015 05:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:42.015 05:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:59:42.015 05:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:59:42.015 05:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:59:42.015 05:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:59:42.015 05:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:59:42.015 05:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:59:42.015 05:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:59:42.015 05:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:59:42.015 05:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:42.015 05:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:42.015 05:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:42.015 05:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:42.015 05:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:42.015 05:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:42.015 05:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:42.015 05:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:42.583 00:59:42.583 05:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:59:42.583 05:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:59:42.583 05:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:42.842 05:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:42.842 05:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:42.842 05:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:42.842 05:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:42.842 05:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:42.842 05:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:59:42.842 { 00:59:42.842 "auth": { 00:59:42.842 "dhgroup": "ffdhe2048", 00:59:42.842 "digest": "sha384", 00:59:42.842 "state": "completed" 00:59:42.842 }, 00:59:42.842 "cntlid": 59, 00:59:42.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:59:42.842 "listen_address": { 00:59:42.842 "adrfam": "IPv4", 00:59:42.842 "traddr": "10.0.0.3", 00:59:42.842 "trsvcid": "4420", 00:59:42.842 "trtype": "TCP" 00:59:42.842 }, 00:59:42.842 "peer_address": { 00:59:42.842 "adrfam": "IPv4", 00:59:42.842 "traddr": "10.0.0.1", 00:59:42.842 "trsvcid": "53648", 00:59:42.842 "trtype": "TCP" 00:59:42.842 }, 00:59:42.842 "qid": 0, 00:59:42.842 "state": "enabled", 00:59:42.842 "thread": "nvmf_tgt_poll_group_000" 00:59:42.842 } 00:59:42.842 ]' 00:59:42.842 05:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:59:42.842 05:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:59:42.842 05:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:59:42.842 05:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:59:42.842 05:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:59:42.842 05:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:42.842 05:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:42.842 05:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:43.102 05:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 00:59:43.102 05:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 00:59:44.039 05:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:44.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:44.039 05:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:59:44.039 05:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:44.039 05:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:44.040 05:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:44.040 05:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:59:44.040 05:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:59:44.040 05:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:59:44.298 05:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:59:44.298 05:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:59:44.298 05:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:59:44.298 05:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:59:44.298 05:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:59:44.298 05:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:44.298 05:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:44.298 05:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:44.298 05:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:44.298 05:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:44.298 05:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:44.298 05:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:44.298 05:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:44.556 00:59:44.556 05:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:59:44.556 05:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:59:44.556 05:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:44.814 05:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:44.814 05:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:44.814 05:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:44.814 05:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:44.814 05:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:44.814 05:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:59:44.814 { 00:59:44.814 "auth": { 00:59:44.814 "dhgroup": "ffdhe2048", 00:59:44.814 "digest": "sha384", 00:59:44.814 "state": "completed" 00:59:44.814 }, 00:59:44.814 "cntlid": 61, 00:59:44.814 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:59:44.814 "listen_address": { 00:59:44.814 "adrfam": "IPv4", 00:59:44.814 "traddr": "10.0.0.3", 00:59:44.814 "trsvcid": "4420", 00:59:44.814 "trtype": "TCP" 00:59:44.814 }, 00:59:44.814 "peer_address": { 00:59:44.814 "adrfam": "IPv4", 00:59:44.814 "traddr": "10.0.0.1", 00:59:44.814 "trsvcid": "53682", 00:59:44.814 "trtype": "TCP" 00:59:44.814 }, 00:59:44.814 "qid": 0, 00:59:44.814 "state": "enabled", 00:59:44.814 "thread": "nvmf_tgt_poll_group_000" 00:59:44.814 } 00:59:44.814 ]' 00:59:44.814 05:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:59:44.814 05:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:59:44.814 05:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:59:45.071 05:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:59:45.071 05:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:59:45.071 05:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:45.071 05:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:45.071 05:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:45.330 05:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 00:59:45.330 05:58:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 00:59:45.896 05:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:45.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:45.896 05:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:59:45.896 05:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:45.896 05:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:45.896 05:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:45.896 05:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:59:45.896 05:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:59:45.897 05:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:59:46.155 05:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:59:46.155 05:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:59:46.155 05:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:59:46.155 05:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:59:46.155 05:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:59:46.155 05:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:46.155 05:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key3 00:59:46.155 05:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:46.155 05:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:46.155 05:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:46.155 05:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:59:46.155 05:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:59:46.155 05:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:59:46.413 00:59:46.671 05:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:59:46.671 05:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:59:46.671 05:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:46.929 05:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:46.929 05:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:46.929 05:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:46.929 05:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:46.929 05:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:46.929 05:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:59:46.929 { 00:59:46.929 "auth": { 00:59:46.929 "dhgroup": "ffdhe2048", 00:59:46.929 "digest": "sha384", 00:59:46.929 "state": "completed" 00:59:46.929 }, 00:59:46.929 "cntlid": 63, 00:59:46.929 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:59:46.929 "listen_address": { 00:59:46.929 "adrfam": "IPv4", 00:59:46.929 "traddr": "10.0.0.3", 00:59:46.929 "trsvcid": "4420", 00:59:46.929 "trtype": "TCP" 00:59:46.929 }, 00:59:46.929 "peer_address": { 00:59:46.929 "adrfam": "IPv4", 00:59:46.929 "traddr": "10.0.0.1", 00:59:46.929 "trsvcid": "53694", 00:59:46.929 "trtype": "TCP" 00:59:46.929 }, 00:59:46.929 "qid": 0, 00:59:46.929 "state": "enabled", 00:59:46.929 "thread": "nvmf_tgt_poll_group_000" 00:59:46.929 } 00:59:46.929 ]' 00:59:46.929 05:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:59:46.929 05:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:59:46.929 05:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:59:46.929 05:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:59:46.929 05:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:59:46.929 05:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:46.929 05:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:46.929 05:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:47.188 05:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 00:59:47.189 05:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 00:59:48.136 05:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:48.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:48.136 05:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:59:48.136 05:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:48.136 05:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:48.136 05:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:48.136 05:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:59:48.136 05:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:59:48.136 05:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:59:48.136 05:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:59:48.395 05:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:59:48.395 05:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:59:48.395 05:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:59:48.395 05:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:59:48.395 05:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:59:48.395 05:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:48.395 05:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:48.395 05:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:48.395 05:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:48.395 05:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:48.395 05:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:48.395 05:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:59:48.395 05:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:48.654 00:59:48.654 05:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:59:48.654 05:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:48.654 05:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:59:48.912 05:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:48.912 05:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:48.912 05:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:48.912 05:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:48.912 05:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:48.912 05:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:59:48.912 { 00:59:48.912 "auth": { 00:59:48.912 "dhgroup": "ffdhe3072", 00:59:48.912 "digest": "sha384", 00:59:48.912 "state": "completed" 00:59:48.912 }, 00:59:48.912 "cntlid": 65, 00:59:48.912 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:59:48.912 "listen_address": { 00:59:48.912 "adrfam": "IPv4", 00:59:48.912 "traddr": "10.0.0.3", 00:59:48.912 "trsvcid": "4420", 00:59:48.912 "trtype": "TCP" 00:59:48.912 }, 00:59:48.912 "peer_address": { 00:59:48.912 "adrfam": "IPv4", 00:59:48.912 "traddr": "10.0.0.1", 00:59:48.912 "trsvcid": "53722", 00:59:48.912 "trtype": "TCP" 00:59:48.912 }, 00:59:48.912 "qid": 0, 00:59:48.912 "state": "enabled", 00:59:48.912 "thread": "nvmf_tgt_poll_group_000" 00:59:48.912 } 00:59:48.912 ]' 00:59:48.912 05:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:59:49.188 05:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:59:49.188 05:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:59:49.188 05:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:59:49.188 05:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:59:49.188 05:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:49.188 05:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:49.188 05:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:49.479 05:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 00:59:49.479 05:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 00:59:50.055 05:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:50.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:50.056 05:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:59:50.056 05:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:50.056 05:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:50.056 05:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:50.056 05:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:59:50.056 05:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:59:50.056 05:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:59:50.314 05:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:59:50.314 05:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:59:50.314 05:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:59:50.314 05:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:59:50.314 05:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:59:50.314 05:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:50.314 05:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:50.314 05:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:50.314 05:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:50.314 05:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:50.314 05:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:50.314 05:58:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:50.314 05:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:50.882 00:59:50.882 05:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:59:50.882 05:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:50.882 05:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:59:51.141 05:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:51.141 05:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:51.141 05:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:51.141 05:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:51.141 05:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:51.141 05:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:59:51.141 { 00:59:51.141 "auth": { 00:59:51.141 "dhgroup": "ffdhe3072", 00:59:51.141 "digest": "sha384", 00:59:51.141 "state": "completed" 00:59:51.141 }, 00:59:51.141 "cntlid": 67, 00:59:51.141 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:59:51.141 "listen_address": { 00:59:51.141 "adrfam": "IPv4", 00:59:51.141 "traddr": "10.0.0.3", 00:59:51.141 "trsvcid": "4420", 00:59:51.141 "trtype": "TCP" 00:59:51.141 }, 00:59:51.141 "peer_address": { 00:59:51.141 "adrfam": "IPv4", 00:59:51.141 "traddr": "10.0.0.1", 00:59:51.141 "trsvcid": "39916", 00:59:51.141 "trtype": "TCP" 00:59:51.141 }, 00:59:51.141 "qid": 0, 00:59:51.141 "state": "enabled", 00:59:51.141 "thread": "nvmf_tgt_poll_group_000" 00:59:51.141 } 00:59:51.141 ]' 00:59:51.141 05:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:59:51.141 05:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:59:51.141 05:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:59:51.141 05:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:59:51.141 05:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:59:51.141 05:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:51.141 05:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:51.141 05:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:51.399 05:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 00:59:51.400 05:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 00:59:51.966 05:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:51.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:51.966 05:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:59:51.966 05:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:51.966 05:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:51.966 05:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:51.966 05:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:59:51.966 05:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:59:51.966 05:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:59:52.533 05:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:59:52.533 05:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:59:52.533 05:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:59:52.533 05:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:59:52.533 05:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:59:52.533 05:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:52.533 05:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:52.533 05:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:52.533 05:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:52.533 05:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:52.533 05:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:52.533 05:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:52.533 05:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:59:52.792 00:59:52.792 05:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:59:52.792 05:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:52.792 05:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:59:53.051 05:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:53.051 05:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:53.051 05:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:53.051 05:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:53.051 05:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:53.051 05:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:59:53.051 { 00:59:53.051 "auth": { 00:59:53.051 "dhgroup": "ffdhe3072", 00:59:53.051 "digest": "sha384", 00:59:53.051 "state": "completed" 00:59:53.051 }, 00:59:53.051 "cntlid": 69, 00:59:53.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:59:53.051 "listen_address": { 00:59:53.051 "adrfam": "IPv4", 00:59:53.051 "traddr": "10.0.0.3", 00:59:53.051 "trsvcid": "4420", 00:59:53.051 "trtype": "TCP" 00:59:53.051 }, 00:59:53.051 "peer_address": { 00:59:53.051 "adrfam": "IPv4", 00:59:53.051 "traddr": "10.0.0.1", 00:59:53.051 "trsvcid": "39946", 00:59:53.051 "trtype": "TCP" 00:59:53.051 }, 00:59:53.051 "qid": 0, 00:59:53.051 "state": "enabled", 00:59:53.051 "thread": "nvmf_tgt_poll_group_000" 00:59:53.051 } 00:59:53.051 ]' 00:59:53.051 05:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:59:53.051 05:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:59:53.051 05:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:59:53.051 05:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:59:53.051 05:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:59:53.051 05:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:53.051 05:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:59:53.051 05:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:53.618 05:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 00:59:53.618 05:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 00:59:54.187 05:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:54.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:54.187 05:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:59:54.187 05:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:54.187 05:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:54.187 05:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:54.187 05:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:59:54.187 05:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:59:54.187 05:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:59:54.446 05:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:59:54.446 05:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:59:54.446 05:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:59:54.446 05:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:59:54.446 05:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:59:54.446 05:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:54.446 05:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key3 00:59:54.446 05:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:54.446 05:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:54.446 05:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:54.446 05:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:59:54.446 05:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:59:54.446 05:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:59:54.706 00:59:54.706 05:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:59:54.706 05:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:54.706 05:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:59:54.965 05:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:54.965 05:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:54.965 05:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:54.965 05:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:54.965 05:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:54.965 05:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:59:54.965 { 00:59:54.965 "auth": { 00:59:54.965 "dhgroup": "ffdhe3072", 00:59:54.965 "digest": "sha384", 00:59:54.965 "state": "completed" 00:59:54.965 }, 00:59:54.965 "cntlid": 71, 00:59:54.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:59:54.965 "listen_address": { 00:59:54.965 "adrfam": "IPv4", 00:59:54.965 "traddr": "10.0.0.3", 00:59:54.965 "trsvcid": "4420", 00:59:54.965 "trtype": "TCP" 00:59:54.965 }, 00:59:54.965 "peer_address": { 00:59:54.965 "adrfam": "IPv4", 00:59:54.965 "traddr": "10.0.0.1", 00:59:54.965 "trsvcid": "39974", 00:59:54.965 "trtype": "TCP" 00:59:54.965 }, 00:59:54.965 "qid": 0, 00:59:54.965 "state": "enabled", 00:59:54.965 "thread": "nvmf_tgt_poll_group_000" 00:59:54.965 } 00:59:54.965 ]' 00:59:54.965 05:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:59:54.965 05:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:59:54.965 05:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:59:54.965 05:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:59:54.965 05:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:59:54.965 05:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:54.965 05:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:54.965 05:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:55.223 05:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 00:59:55.223 05:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 00:59:56.161 05:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:56.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:56.161 05:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:59:56.161 05:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:56.161 05:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:56.161 05:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:56.161 05:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:59:56.161 05:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:59:56.161 05:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:59:56.161 05:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:59:56.428 05:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:59:56.428 05:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:59:56.428 05:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:59:56.428 05:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:59:56.428 05:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:59:56.428 05:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:56.428 05:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:56.428 05:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:56.428 05:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:56.428 05:58:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:56.428 05:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:56.428 05:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:56.429 05:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:59:56.686 00:59:56.686 05:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:59:56.686 05:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:59:56.686 05:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:56.945 05:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:56.945 05:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:56.945 05:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:56.945 05:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:56.945 05:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:56.945 05:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:59:56.945 { 00:59:56.945 "auth": { 00:59:56.945 "dhgroup": "ffdhe4096", 00:59:56.945 "digest": "sha384", 00:59:56.945 "state": "completed" 00:59:56.945 }, 00:59:56.945 "cntlid": 73, 00:59:56.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:59:56.945 "listen_address": { 00:59:56.945 "adrfam": "IPv4", 00:59:56.945 "traddr": "10.0.0.3", 00:59:56.945 "trsvcid": "4420", 00:59:56.945 "trtype": "TCP" 00:59:56.945 }, 00:59:56.945 "peer_address": { 00:59:56.945 "adrfam": "IPv4", 00:59:56.945 "traddr": "10.0.0.1", 00:59:56.945 "trsvcid": "40006", 00:59:56.945 "trtype": "TCP" 00:59:56.945 }, 00:59:56.945 "qid": 0, 00:59:56.945 "state": "enabled", 00:59:56.945 "thread": "nvmf_tgt_poll_group_000" 00:59:56.945 } 00:59:56.945 ]' 00:59:56.945 05:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:59:57.203 05:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:59:57.203 05:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:59:57.203 05:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:59:57.203 05:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:59:57.203 05:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:57.203 05:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:57.203 05:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:57.460 05:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 00:59:57.460 05:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 00:59:58.394 05:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:59:58.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:59:58.394 05:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 00:59:58.394 05:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:58.394 05:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:58.394 05:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:58.394 05:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:59:58.394 05:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:59:58.394 05:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:59:58.394 05:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:59:58.394 05:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:59:58.394 05:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:59:58.394 05:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:59:58.394 05:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:59:58.394 05:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:59:58.394 05:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:58.394 05:58:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:58.394 05:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:58.394 05:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:58.394 05:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:58.394 05:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:58.394 05:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:59:58.961 00:59:58.961 05:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:59:58.961 05:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:59:58.961 05:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:59:59.220 05:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:59:59.220 05:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:59:59.220 05:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:59:59.220 05:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:59:59.220 05:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:59:59.220 05:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:59:59.220 { 00:59:59.220 "auth": { 00:59:59.220 "dhgroup": "ffdhe4096", 00:59:59.220 "digest": "sha384", 00:59:59.220 "state": "completed" 00:59:59.220 }, 00:59:59.220 "cntlid": 75, 00:59:59.220 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 00:59:59.220 "listen_address": { 00:59:59.220 "adrfam": "IPv4", 00:59:59.220 "traddr": "10.0.0.3", 00:59:59.220 "trsvcid": "4420", 00:59:59.220 "trtype": "TCP" 00:59:59.220 }, 00:59:59.220 "peer_address": { 00:59:59.220 "adrfam": "IPv4", 00:59:59.220 "traddr": "10.0.0.1", 00:59:59.220 "trsvcid": "40036", 00:59:59.220 "trtype": "TCP" 00:59:59.220 }, 00:59:59.220 "qid": 0, 00:59:59.220 "state": "enabled", 00:59:59.220 "thread": "nvmf_tgt_poll_group_000" 00:59:59.220 } 00:59:59.220 ]' 00:59:59.220 05:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:59:59.220 05:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:59:59.220 05:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:59:59.220 05:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:59:59.220 05:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:59:59.479 05:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:59:59.479 05:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:59:59.479 05:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:59:59.738 05:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 00:59:59.738 05:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 01:00:00.304 05:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:00:00.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:00:00.304 05:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:00:00.304 05:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:00.304 05:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:00.304 05:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:00.304 05:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:00:00.304 05:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:00:00.304 05:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:00:00.564 05:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 01:00:00.564 05:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:00:00.564 05:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:00:00.564 05:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:00:00.564 05:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:00:00.564 05:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:00:00.564 05:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:00.564 05:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:00.564 05:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:00.564 05:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:00.564 05:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:00.564 05:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:00.564 05:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:01.130 01:00:01.130 05:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:00:01.130 05:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:00:01.130 05:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:00:01.388 05:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:01.388 05:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:00:01.388 05:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:01.388 05:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:01.388 05:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:01.388 05:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:00:01.388 { 01:00:01.388 "auth": { 01:00:01.388 "dhgroup": "ffdhe4096", 01:00:01.388 "digest": "sha384", 01:00:01.388 "state": "completed" 01:00:01.388 }, 01:00:01.388 "cntlid": 77, 01:00:01.388 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:00:01.388 "listen_address": { 01:00:01.388 "adrfam": "IPv4", 01:00:01.388 "traddr": "10.0.0.3", 01:00:01.388 "trsvcid": "4420", 01:00:01.388 "trtype": "TCP" 01:00:01.388 }, 01:00:01.388 "peer_address": { 01:00:01.388 "adrfam": "IPv4", 01:00:01.388 "traddr": "10.0.0.1", 01:00:01.388 "trsvcid": "52448", 01:00:01.388 "trtype": "TCP" 01:00:01.388 }, 01:00:01.388 "qid": 0, 01:00:01.388 "state": "enabled", 01:00:01.388 "thread": "nvmf_tgt_poll_group_000" 01:00:01.388 } 01:00:01.388 ]' 01:00:01.388 05:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:00:01.388 05:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:00:01.388 05:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 01:00:01.647 05:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:00:01.647 05:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:00:01.647 05:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:00:01.648 05:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:00:01.648 05:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:00:01.906 05:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 01:00:01.906 05:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 01:00:02.480 05:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:00:02.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:00:02.480 05:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:00:02.480 05:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:02.480 05:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:02.480 05:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:02.480 05:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:00:02.480 05:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:00:02.480 05:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:00:02.739 05:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 01:00:02.739 05:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:00:02.739 05:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:00:02.739 05:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:00:02.739 05:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:00:02.739 05:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:00:02.739 05:58:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key3 01:00:02.739 05:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:02.739 05:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:02.739 05:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:02.739 05:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:00:02.739 05:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:00:02.739 05:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:00:03.315 01:00:03.315 05:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:00:03.315 05:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:00:03.315 05:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:00:03.573 05:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:03.573 05:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:00:03.573 05:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:03.573 05:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:03.573 05:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:03.573 05:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:00:03.573 { 01:00:03.573 "auth": { 01:00:03.573 "dhgroup": "ffdhe4096", 01:00:03.573 "digest": "sha384", 01:00:03.573 "state": "completed" 01:00:03.573 }, 01:00:03.573 "cntlid": 79, 01:00:03.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:00:03.573 "listen_address": { 01:00:03.573 "adrfam": "IPv4", 01:00:03.573 "traddr": "10.0.0.3", 01:00:03.573 "trsvcid": "4420", 01:00:03.573 "trtype": "TCP" 01:00:03.573 }, 01:00:03.573 "peer_address": { 01:00:03.573 "adrfam": "IPv4", 01:00:03.573 "traddr": "10.0.0.1", 01:00:03.573 "trsvcid": "52472", 01:00:03.573 "trtype": "TCP" 01:00:03.573 }, 01:00:03.573 "qid": 0, 01:00:03.573 "state": "enabled", 01:00:03.573 "thread": "nvmf_tgt_poll_group_000" 01:00:03.573 } 01:00:03.573 ]' 01:00:03.573 05:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:00:03.573 05:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:00:03.573 05:58:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:00:03.573 05:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:00:03.573 05:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:00:03.831 05:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:00:03.831 05:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:00:03.831 05:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:00:04.090 05:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 01:00:04.090 05:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 01:00:04.658 05:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:00:04.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:00:04.658 05:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:00:04.658 05:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:04.658 05:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:04.658 05:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:04.658 05:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:00:04.658 05:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:00:04.658 05:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:00:04.658 05:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:00:04.918 05:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 01:00:04.918 05:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:00:04.918 05:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:00:04.918 05:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:00:04.918 05:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:00:04.918 05:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:00:04.918 05:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:00:04.918 05:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:04.918 05:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:04.918 05:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:04.918 05:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:00:04.918 05:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:00:04.918 05:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:00:05.486 01:00:05.486 05:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:00:05.486 05:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:00:05.486 05:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:00:05.745 05:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:05.745 05:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:00:05.745 05:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:05.745 05:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:05.745 05:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:05.745 05:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:00:05.745 { 01:00:05.745 "auth": { 01:00:05.745 "dhgroup": "ffdhe6144", 01:00:05.745 "digest": "sha384", 01:00:05.745 "state": "completed" 01:00:05.745 }, 01:00:05.745 "cntlid": 81, 01:00:05.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:00:05.745 "listen_address": { 01:00:05.745 "adrfam": "IPv4", 01:00:05.745 "traddr": "10.0.0.3", 01:00:05.745 "trsvcid": "4420", 01:00:05.745 "trtype": "TCP" 01:00:05.745 }, 01:00:05.745 "peer_address": { 01:00:05.745 "adrfam": "IPv4", 01:00:05.745 "traddr": "10.0.0.1", 01:00:05.745 "trsvcid": "52506", 01:00:05.745 "trtype": "TCP" 01:00:05.745 }, 01:00:05.745 "qid": 0, 01:00:05.745 "state": "enabled", 01:00:05.745 "thread": "nvmf_tgt_poll_group_000" 01:00:05.745 } 01:00:05.746 ]' 01:00:05.746 05:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
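The wrapped trace above repeats the same DH-HMAC-CHAP cycle once per digest/dhgroup/key combination. Condensed into plain shell, one pass of that cycle looks roughly like the sketch below; the rpc.py path, addresses, NQNs, ports and key names are taken from the log itself, while $hostnqn, $hostid, $secret and the target-side RPC socket (not visible in this excerpt) are stand-ins.

# host side: limit DH-HMAC-CHAP negotiation to one digest/dhgroup pair
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
# target side: authorize the host NQN with the key pair under test (default RPC socket assumed)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
# attach from the host bdev layer, then confirm the qpair finished authentication
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expected: completed
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
# repeat the handshake with the kernel initiator, then tear down
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 --dhchap-secret "$secret"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"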
01:00:05.746 05:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:00:05.746 05:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:00:05.746 05:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:00:05.746 05:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:00:06.005 05:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:00:06.005 05:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:00:06.005 05:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:00:06.264 05:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 01:00:06.264 05:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 01:00:06.833 05:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:00:06.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:00:06.833 05:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:00:06.833 05:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:06.833 05:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:06.833 05:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:06.833 05:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:00:06.833 05:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:00:06.833 05:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:00:07.091 05:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 01:00:07.091 05:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:00:07.091 05:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:00:07.091 05:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 01:00:07.091 05:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:00:07.091 05:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:00:07.091 05:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:00:07.091 05:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:07.091 05:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:07.091 05:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:07.091 05:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:00:07.091 05:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:00:07.091 05:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:00:07.657 01:00:07.657 05:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:00:07.657 05:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:00:07.657 05:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:00:07.916 05:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:07.916 05:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:00:07.916 05:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:07.916 05:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:07.916 05:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:07.916 05:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:00:07.916 { 01:00:07.916 "auth": { 01:00:07.916 "dhgroup": "ffdhe6144", 01:00:07.916 "digest": "sha384", 01:00:07.916 "state": "completed" 01:00:07.916 }, 01:00:07.916 "cntlid": 83, 01:00:07.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:00:07.916 "listen_address": { 01:00:07.916 "adrfam": "IPv4", 01:00:07.916 "traddr": "10.0.0.3", 01:00:07.916 "trsvcid": "4420", 01:00:07.916 "trtype": "TCP" 01:00:07.916 }, 01:00:07.916 "peer_address": { 01:00:07.916 "adrfam": "IPv4", 01:00:07.916 "traddr": "10.0.0.1", 01:00:07.916 "trsvcid": "52544", 01:00:07.916 "trtype": "TCP" 01:00:07.916 }, 01:00:07.916 "qid": 0, 01:00:07.916 "state": 
"enabled", 01:00:07.916 "thread": "nvmf_tgt_poll_group_000" 01:00:07.916 } 01:00:07.916 ]' 01:00:07.916 05:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:00:07.916 05:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:00:07.916 05:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:00:07.916 05:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:00:07.916 05:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:00:08.175 05:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:00:08.175 05:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:00:08.175 05:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:00:08.433 05:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 01:00:08.433 05:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 01:00:09.001 05:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:00:09.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:00:09.001 05:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:00:09.001 05:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:09.001 05:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:09.001 05:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:09.001 05:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:00:09.001 05:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:00:09.001 05:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:00:09.260 05:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 01:00:09.260 05:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:00:09.260 05:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha384 01:00:09.260 05:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:00:09.260 05:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:00:09.260 05:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:00:09.260 05:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:09.260 05:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:09.260 05:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:09.260 05:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:09.260 05:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:09.260 05:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:09.260 05:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:09.828 01:00:09.828 05:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:00:09.828 05:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:00:09.829 05:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:00:10.087 05:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:10.087 05:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:00:10.087 05:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:10.087 05:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:10.087 05:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:10.087 05:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:00:10.087 { 01:00:10.087 "auth": { 01:00:10.088 "dhgroup": "ffdhe6144", 01:00:10.088 "digest": "sha384", 01:00:10.088 "state": "completed" 01:00:10.088 }, 01:00:10.088 "cntlid": 85, 01:00:10.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:00:10.088 "listen_address": { 01:00:10.088 "adrfam": "IPv4", 01:00:10.088 "traddr": "10.0.0.3", 01:00:10.088 "trsvcid": "4420", 01:00:10.088 "trtype": "TCP" 01:00:10.088 }, 01:00:10.088 "peer_address": { 01:00:10.088 "adrfam": "IPv4", 01:00:10.088 "traddr": "10.0.0.1", 01:00:10.088 
"trsvcid": "52564", 01:00:10.088 "trtype": "TCP" 01:00:10.088 }, 01:00:10.088 "qid": 0, 01:00:10.088 "state": "enabled", 01:00:10.088 "thread": "nvmf_tgt_poll_group_000" 01:00:10.088 } 01:00:10.088 ]' 01:00:10.088 05:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:00:10.088 05:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:00:10.088 05:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:00:10.347 05:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:00:10.347 05:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:00:10.347 05:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:00:10.347 05:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:00:10.347 05:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:00:10.606 05:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 01:00:10.606 05:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 01:00:11.176 05:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:00:11.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:00:11.176 05:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:00:11.176 05:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:11.176 05:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:11.176 05:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:11.176 05:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:00:11.176 05:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:00:11.176 05:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:00:11.434 05:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 01:00:11.435 05:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 01:00:11.435 05:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:00:11.435 05:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:00:11.435 05:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:00:11.435 05:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:00:11.435 05:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key3 01:00:11.435 05:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:11.435 05:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:11.435 05:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:11.435 05:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:00:11.435 05:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:00:11.435 05:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:00:12.001 01:00:12.001 05:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:00:12.001 05:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:00:12.001 05:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:00:12.261 05:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:12.261 05:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:00:12.261 05:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:12.261 05:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:12.261 05:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:12.261 05:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:00:12.261 { 01:00:12.261 "auth": { 01:00:12.261 "dhgroup": "ffdhe6144", 01:00:12.261 "digest": "sha384", 01:00:12.261 "state": "completed" 01:00:12.261 }, 01:00:12.261 "cntlid": 87, 01:00:12.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:00:12.261 "listen_address": { 01:00:12.261 "adrfam": "IPv4", 01:00:12.261 "traddr": "10.0.0.3", 01:00:12.261 "trsvcid": "4420", 01:00:12.261 "trtype": "TCP" 01:00:12.261 }, 01:00:12.261 "peer_address": { 01:00:12.261 "adrfam": "IPv4", 01:00:12.261 "traddr": "10.0.0.1", 
01:00:12.261 "trsvcid": "39482", 01:00:12.261 "trtype": "TCP" 01:00:12.261 }, 01:00:12.261 "qid": 0, 01:00:12.261 "state": "enabled", 01:00:12.261 "thread": "nvmf_tgt_poll_group_000" 01:00:12.261 } 01:00:12.261 ]' 01:00:12.261 05:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:00:12.261 05:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:00:12.261 05:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:00:12.261 05:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:00:12.261 05:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:00:12.520 05:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:00:12.520 05:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:00:12.520 05:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:00:12.779 05:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 01:00:12.779 05:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 01:00:13.347 05:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:00:13.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:00:13.347 05:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:00:13.347 05:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:13.347 05:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:13.347 05:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:13.347 05:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:00:13.347 05:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:00:13.347 05:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:00:13.347 05:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:00:13.606 05:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 01:00:13.606 05:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- 
# local digest dhgroup key ckey qpairs 01:00:13.606 05:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:00:13.606 05:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:00:13.606 05:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:00:13.606 05:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:00:13.606 05:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:00:13.606 05:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:13.606 05:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:13.606 05:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:13.606 05:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:00:13.606 05:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:00:13.606 05:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:00:14.174 01:00:14.174 05:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:00:14.175 05:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:00:14.175 05:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:00:14.433 05:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:14.434 05:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:00:14.434 05:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:14.434 05:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:14.434 05:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:14.434 05:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:00:14.434 { 01:00:14.434 "auth": { 01:00:14.434 "dhgroup": "ffdhe8192", 01:00:14.434 "digest": "sha384", 01:00:14.434 "state": "completed" 01:00:14.434 }, 01:00:14.434 "cntlid": 89, 01:00:14.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:00:14.434 "listen_address": { 01:00:14.434 "adrfam": "IPv4", 01:00:14.434 "traddr": "10.0.0.3", 01:00:14.434 "trsvcid": "4420", 01:00:14.434 "trtype": "TCP" 
01:00:14.434 }, 01:00:14.434 "peer_address": { 01:00:14.434 "adrfam": "IPv4", 01:00:14.434 "traddr": "10.0.0.1", 01:00:14.434 "trsvcid": "39528", 01:00:14.434 "trtype": "TCP" 01:00:14.434 }, 01:00:14.434 "qid": 0, 01:00:14.434 "state": "enabled", 01:00:14.434 "thread": "nvmf_tgt_poll_group_000" 01:00:14.434 } 01:00:14.434 ]' 01:00:14.434 05:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:00:14.434 05:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:00:14.434 05:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:00:14.694 05:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:00:14.694 05:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:00:14.694 05:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:00:14.694 05:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:00:14.694 05:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:00:14.952 05:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 01:00:14.952 05:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 01:00:15.520 05:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:00:15.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:00:15.520 05:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:00:15.520 05:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:15.520 05:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:15.520 05:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:15.520 05:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:00:15.520 05:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:00:15.520 05:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:00:15.779 05:59:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 01:00:15.779 05:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:00:15.779 05:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:00:15.779 05:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:00:15.779 05:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:00:15.779 05:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:00:15.779 05:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:00:15.779 05:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:15.779 05:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:15.779 05:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:15.779 05:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:00:15.779 05:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:00:15.780 05:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:00:16.346 01:00:16.605 05:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:00:16.605 05:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:00:16.605 05:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:00:16.864 05:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:16.864 05:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:00:16.864 05:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:16.864 05:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:16.864 05:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:16.864 05:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:00:16.864 { 01:00:16.864 "auth": { 01:00:16.864 "dhgroup": "ffdhe8192", 01:00:16.864 "digest": "sha384", 01:00:16.865 "state": "completed" 01:00:16.865 }, 01:00:16.865 "cntlid": 91, 01:00:16.865 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:00:16.865 "listen_address": { 01:00:16.865 "adrfam": "IPv4", 01:00:16.865 "traddr": "10.0.0.3", 01:00:16.865 "trsvcid": "4420", 01:00:16.865 "trtype": "TCP" 01:00:16.865 }, 01:00:16.865 "peer_address": { 01:00:16.865 "adrfam": "IPv4", 01:00:16.865 "traddr": "10.0.0.1", 01:00:16.865 "trsvcid": "39554", 01:00:16.865 "trtype": "TCP" 01:00:16.865 }, 01:00:16.865 "qid": 0, 01:00:16.865 "state": "enabled", 01:00:16.865 "thread": "nvmf_tgt_poll_group_000" 01:00:16.865 } 01:00:16.865 ]' 01:00:16.865 05:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:00:16.865 05:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:00:16.865 05:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:00:16.865 05:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:00:16.865 05:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:00:16.865 05:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:00:16.865 05:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:00:16.865 05:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:00:17.125 05:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 01:00:17.125 05:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 01:00:18.061 05:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:00:18.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:00:18.061 05:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:00:18.061 05:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:18.061 05:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:18.061 05:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:18.061 05:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:00:18.061 05:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:00:18.061 05:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:00:18.061 05:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 01:00:18.061 05:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:00:18.061 05:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:00:18.061 05:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:00:18.061 05:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:00:18.061 05:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:00:18.061 05:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:18.061 05:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:18.061 05:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:18.061 05:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:18.061 05:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:18.061 05:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:18.061 05:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:18.999 01:00:18.999 05:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:00:18.999 05:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:00:19.000 05:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:00:19.259 05:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:19.259 05:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:00:19.259 05:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:19.259 05:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:19.259 05:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:19.259 05:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:00:19.259 { 01:00:19.259 "auth": { 01:00:19.259 "dhgroup": "ffdhe8192", 
01:00:19.259 "digest": "sha384", 01:00:19.259 "state": "completed" 01:00:19.259 }, 01:00:19.259 "cntlid": 93, 01:00:19.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:00:19.259 "listen_address": { 01:00:19.259 "adrfam": "IPv4", 01:00:19.259 "traddr": "10.0.0.3", 01:00:19.259 "trsvcid": "4420", 01:00:19.259 "trtype": "TCP" 01:00:19.259 }, 01:00:19.259 "peer_address": { 01:00:19.259 "adrfam": "IPv4", 01:00:19.259 "traddr": "10.0.0.1", 01:00:19.259 "trsvcid": "39592", 01:00:19.259 "trtype": "TCP" 01:00:19.259 }, 01:00:19.259 "qid": 0, 01:00:19.259 "state": "enabled", 01:00:19.259 "thread": "nvmf_tgt_poll_group_000" 01:00:19.259 } 01:00:19.259 ]' 01:00:19.259 05:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:00:19.259 05:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:00:19.259 05:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:00:19.259 05:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:00:19.259 05:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:00:19.259 05:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:00:19.259 05:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:00:19.259 05:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:00:19.528 05:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 01:00:19.528 05:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 01:00:20.212 05:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:00:20.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:00:20.471 05:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:00:20.471 05:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:20.471 05:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:20.471 05:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:20.471 05:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:00:20.471 05:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 01:00:20.471 05:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:00:20.729 05:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 01:00:20.729 05:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:00:20.729 05:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 01:00:20.729 05:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:00:20.729 05:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:00:20.729 05:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:00:20.729 05:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key3 01:00:20.729 05:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:20.729 05:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:20.729 05:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:20.729 05:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:00:20.729 05:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:00:20.729 05:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:00:21.295 01:00:21.295 05:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:00:21.295 05:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:00:21.295 05:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:00:21.554 05:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:21.554 05:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:00:21.554 05:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:21.554 05:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:21.554 05:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:21.554 05:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:00:21.554 { 01:00:21.554 "auth": { 01:00:21.554 "dhgroup": 
"ffdhe8192", 01:00:21.554 "digest": "sha384", 01:00:21.554 "state": "completed" 01:00:21.554 }, 01:00:21.554 "cntlid": 95, 01:00:21.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:00:21.554 "listen_address": { 01:00:21.554 "adrfam": "IPv4", 01:00:21.554 "traddr": "10.0.0.3", 01:00:21.554 "trsvcid": "4420", 01:00:21.554 "trtype": "TCP" 01:00:21.554 }, 01:00:21.554 "peer_address": { 01:00:21.554 "adrfam": "IPv4", 01:00:21.554 "traddr": "10.0.0.1", 01:00:21.554 "trsvcid": "56884", 01:00:21.554 "trtype": "TCP" 01:00:21.554 }, 01:00:21.554 "qid": 0, 01:00:21.554 "state": "enabled", 01:00:21.554 "thread": "nvmf_tgt_poll_group_000" 01:00:21.554 } 01:00:21.554 ]' 01:00:21.554 05:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:00:21.554 05:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 01:00:21.554 05:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:00:21.554 05:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:00:21.554 05:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:00:21.554 05:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:00:21.554 05:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:00:21.554 05:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:00:21.812 05:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 01:00:21.812 05:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 01:00:22.747 05:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:00:22.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:00:22.747 05:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:00:22.747 05:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:22.747 05:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:22.747 05:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:22.747 05:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 01:00:22.747 05:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:00:22.747 05:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:00:22.747 
05:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:00:22.747 05:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:00:22.747 05:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 01:00:22.747 05:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:00:22.747 05:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:00:22.747 05:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:00:22.747 05:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:00:22.747 05:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:00:22.747 05:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:00:22.747 05:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:22.747 05:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:22.747 05:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:22.747 05:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:00:22.747 05:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:00:22.748 05:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:00:23.314 01:00:23.314 05:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:00:23.314 05:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:00:23.314 05:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:00:23.572 05:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:23.572 05:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:00:23.572 05:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:23.572 05:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:23.572 05:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:23.572 05:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:00:23.572 { 01:00:23.572 "auth": { 01:00:23.572 "dhgroup": "null", 01:00:23.572 "digest": "sha512", 01:00:23.572 "state": "completed" 01:00:23.572 }, 01:00:23.572 "cntlid": 97, 01:00:23.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:00:23.572 "listen_address": { 01:00:23.572 "adrfam": "IPv4", 01:00:23.572 "traddr": "10.0.0.3", 01:00:23.572 "trsvcid": "4420", 01:00:23.572 "trtype": "TCP" 01:00:23.572 }, 01:00:23.572 "peer_address": { 01:00:23.572 "adrfam": "IPv4", 01:00:23.572 "traddr": "10.0.0.1", 01:00:23.572 "trsvcid": "56898", 01:00:23.572 "trtype": "TCP" 01:00:23.572 }, 01:00:23.572 "qid": 0, 01:00:23.572 "state": "enabled", 01:00:23.572 "thread": "nvmf_tgt_poll_group_000" 01:00:23.572 } 01:00:23.572 ]' 01:00:23.572 05:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:00:23.572 05:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:00:23.572 05:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:00:23.572 05:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:00:23.572 05:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:00:23.572 05:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:00:23.572 05:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:00:23.572 05:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:00:23.830 05:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 01:00:23.830 05:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 01:00:24.762 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:00:24.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:00:24.762 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:00:24.762 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:24.762 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:24.762 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 01:00:24.762 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:00:24.762 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:00:24.762 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:00:24.762 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 01:00:24.762 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:00:24.762 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:00:24.762 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:00:24.762 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:00:24.762 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:00:24.762 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:00:24.762 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:24.762 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:24.762 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:24.762 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:00:24.762 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:00:24.762 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:00:25.020 01:00:25.020 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:00:25.020 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:00:25.020 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:00:25.278 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:25.278 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:00:25.278 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:25.278 05:59:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:25.537 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:25.537 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:00:25.537 { 01:00:25.537 "auth": { 01:00:25.537 "dhgroup": "null", 01:00:25.537 "digest": "sha512", 01:00:25.537 "state": "completed" 01:00:25.537 }, 01:00:25.537 "cntlid": 99, 01:00:25.537 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:00:25.537 "listen_address": { 01:00:25.537 "adrfam": "IPv4", 01:00:25.537 "traddr": "10.0.0.3", 01:00:25.537 "trsvcid": "4420", 01:00:25.537 "trtype": "TCP" 01:00:25.537 }, 01:00:25.537 "peer_address": { 01:00:25.537 "adrfam": "IPv4", 01:00:25.537 "traddr": "10.0.0.1", 01:00:25.537 "trsvcid": "56922", 01:00:25.537 "trtype": "TCP" 01:00:25.537 }, 01:00:25.537 "qid": 0, 01:00:25.537 "state": "enabled", 01:00:25.537 "thread": "nvmf_tgt_poll_group_000" 01:00:25.537 } 01:00:25.537 ]' 01:00:25.537 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:00:25.537 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:00:25.537 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:00:25.537 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:00:25.537 05:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:00:25.537 05:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:00:25.537 05:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:00:25.537 05:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:00:25.795 05:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 01:00:25.795 05:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 01:00:26.362 05:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:00:26.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:00:26.362 05:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:00:26.362 05:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:26.362 05:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:26.362 05:59:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:26.362 05:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:00:26.362 05:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:00:26.362 05:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:00:26.621 05:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 01:00:26.621 05:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:00:26.621 05:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:00:26.621 05:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:00:26.621 05:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:00:26.621 05:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:00:26.621 05:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:26.621 05:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:26.621 05:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:26.879 05:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:26.879 05:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:26.879 05:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:26.879 05:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:27.138 01:00:27.138 05:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:00:27.138 05:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:00:27.138 05:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:00:27.397 05:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:27.397 05:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:00:27.397 05:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:00:27.397 05:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:27.397 05:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:27.397 05:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:00:27.397 { 01:00:27.397 "auth": { 01:00:27.397 "dhgroup": "null", 01:00:27.397 "digest": "sha512", 01:00:27.397 "state": "completed" 01:00:27.397 }, 01:00:27.397 "cntlid": 101, 01:00:27.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:00:27.397 "listen_address": { 01:00:27.397 "adrfam": "IPv4", 01:00:27.397 "traddr": "10.0.0.3", 01:00:27.397 "trsvcid": "4420", 01:00:27.397 "trtype": "TCP" 01:00:27.397 }, 01:00:27.397 "peer_address": { 01:00:27.397 "adrfam": "IPv4", 01:00:27.397 "traddr": "10.0.0.1", 01:00:27.397 "trsvcid": "56948", 01:00:27.397 "trtype": "TCP" 01:00:27.397 }, 01:00:27.397 "qid": 0, 01:00:27.397 "state": "enabled", 01:00:27.397 "thread": "nvmf_tgt_poll_group_000" 01:00:27.397 } 01:00:27.397 ]' 01:00:27.397 05:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:00:27.397 05:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:00:27.397 05:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:00:27.397 05:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:00:27.397 05:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:00:27.397 05:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:00:27.397 05:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:00:27.397 05:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:00:27.656 05:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 01:00:27.656 05:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 01:00:28.225 05:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:00:28.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:00:28.483 05:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:00:28.483 05:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:28.484 05:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 01:00:28.484 05:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:28.484 05:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:00:28.484 05:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:00:28.484 05:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 01:00:28.742 05:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 01:00:28.742 05:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:00:28.742 05:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:00:28.742 05:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 01:00:28.742 05:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:00:28.742 05:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:00:28.742 05:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key3 01:00:28.742 05:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:28.742 05:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:28.742 05:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:28.742 05:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:00:28.742 05:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:00:28.742 05:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:00:29.001 01:00:29.001 05:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:00:29.001 05:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:00:29.001 05:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:00:29.260 05:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:29.260 05:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:00:29.260 05:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:00:29.260 05:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:29.260 05:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:29.260 05:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:00:29.260 { 01:00:29.260 "auth": { 01:00:29.260 "dhgroup": "null", 01:00:29.260 "digest": "sha512", 01:00:29.260 "state": "completed" 01:00:29.260 }, 01:00:29.260 "cntlid": 103, 01:00:29.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:00:29.260 "listen_address": { 01:00:29.260 "adrfam": "IPv4", 01:00:29.260 "traddr": "10.0.0.3", 01:00:29.260 "trsvcid": "4420", 01:00:29.260 "trtype": "TCP" 01:00:29.260 }, 01:00:29.260 "peer_address": { 01:00:29.260 "adrfam": "IPv4", 01:00:29.260 "traddr": "10.0.0.1", 01:00:29.260 "trsvcid": "56966", 01:00:29.260 "trtype": "TCP" 01:00:29.260 }, 01:00:29.260 "qid": 0, 01:00:29.260 "state": "enabled", 01:00:29.260 "thread": "nvmf_tgt_poll_group_000" 01:00:29.260 } 01:00:29.260 ]' 01:00:29.260 05:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:00:29.260 05:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:00:29.260 05:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:00:29.260 05:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 01:00:29.260 05:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:00:29.519 05:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:00:29.519 05:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:00:29.519 05:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:00:29.778 05:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 01:00:29.778 05:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 01:00:30.346 05:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:00:30.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:00:30.346 05:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:00:30.346 05:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:30.346 05:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:30.346 05:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 01:00:30.346 05:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:00:30.346 05:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:00:30.346 05:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:00:30.346 05:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:00:30.609 05:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 01:00:30.609 05:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:00:30.609 05:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:00:30.609 05:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:00:30.609 05:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:00:30.609 05:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:00:30.609 05:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:00:30.609 05:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:30.609 05:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:30.609 05:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:30.609 05:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:00:30.609 05:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:00:30.609 05:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:00:30.871 01:00:31.129 05:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:00:31.129 05:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:00:31.129 05:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:00:31.387 05:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:31.387 05:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:00:31.387 
05:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:31.387 05:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:31.387 05:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:31.387 05:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:00:31.387 { 01:00:31.387 "auth": { 01:00:31.387 "dhgroup": "ffdhe2048", 01:00:31.387 "digest": "sha512", 01:00:31.387 "state": "completed" 01:00:31.387 }, 01:00:31.387 "cntlid": 105, 01:00:31.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:00:31.387 "listen_address": { 01:00:31.387 "adrfam": "IPv4", 01:00:31.387 "traddr": "10.0.0.3", 01:00:31.387 "trsvcid": "4420", 01:00:31.387 "trtype": "TCP" 01:00:31.387 }, 01:00:31.387 "peer_address": { 01:00:31.387 "adrfam": "IPv4", 01:00:31.387 "traddr": "10.0.0.1", 01:00:31.387 "trsvcid": "50154", 01:00:31.387 "trtype": "TCP" 01:00:31.387 }, 01:00:31.387 "qid": 0, 01:00:31.387 "state": "enabled", 01:00:31.387 "thread": "nvmf_tgt_poll_group_000" 01:00:31.387 } 01:00:31.387 ]' 01:00:31.387 05:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:00:31.387 05:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:00:31.387 05:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:00:31.387 05:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:00:31.387 05:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:00:31.387 05:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:00:31.387 05:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:00:31.387 05:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:00:31.646 05:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 01:00:31.646 05:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 01:00:32.583 05:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:00:32.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:00:32.583 05:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:00:32.583 05:59:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:32.583 05:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:32.583 05:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:32.583 05:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:00:32.583 05:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:00:32.583 05:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:00:32.583 05:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 01:00:32.583 05:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:00:32.583 05:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:00:32.583 05:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:00:32.583 05:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:00:32.583 05:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:00:32.583 05:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:00:32.583 05:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:32.583 05:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:32.583 05:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:32.583 05:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:00:32.583 05:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:00:32.583 05:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:00:32.843 01:00:32.843 05:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:00:32.843 05:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:00:32.843 05:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:00:33.103 05:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 01:00:33.103 05:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:00:33.103 05:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:33.103 05:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:33.103 05:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:33.103 05:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:00:33.103 { 01:00:33.103 "auth": { 01:00:33.103 "dhgroup": "ffdhe2048", 01:00:33.103 "digest": "sha512", 01:00:33.103 "state": "completed" 01:00:33.103 }, 01:00:33.103 "cntlid": 107, 01:00:33.103 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:00:33.103 "listen_address": { 01:00:33.103 "adrfam": "IPv4", 01:00:33.103 "traddr": "10.0.0.3", 01:00:33.103 "trsvcid": "4420", 01:00:33.103 "trtype": "TCP" 01:00:33.103 }, 01:00:33.103 "peer_address": { 01:00:33.103 "adrfam": "IPv4", 01:00:33.103 "traddr": "10.0.0.1", 01:00:33.103 "trsvcid": "50180", 01:00:33.103 "trtype": "TCP" 01:00:33.103 }, 01:00:33.103 "qid": 0, 01:00:33.103 "state": "enabled", 01:00:33.103 "thread": "nvmf_tgt_poll_group_000" 01:00:33.103 } 01:00:33.103 ]' 01:00:33.103 05:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:00:33.363 05:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:00:33.363 05:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:00:33.363 05:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:00:33.363 05:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:00:33.363 05:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:00:33.363 05:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:00:33.363 05:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:00:33.622 05:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 01:00:33.622 05:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 01:00:34.189 05:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:00:34.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:00:34.189 05:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:00:34.189 05:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:34.189 05:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:34.189 05:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:34.189 05:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:00:34.189 05:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:00:34.189 05:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:00:34.447 05:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 01:00:34.447 05:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:00:34.447 05:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:00:34.447 05:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:00:34.447 05:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:00:34.447 05:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:00:34.447 05:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:34.447 05:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:34.447 05:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:34.447 05:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:34.447 05:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:34.447 05:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:34.447 05:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:35.013 01:00:35.013 05:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:00:35.013 05:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:00:35.013 05:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 01:00:35.272 05:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:35.272 05:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:00:35.272 05:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:35.272 05:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:35.272 05:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:35.272 05:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:00:35.272 { 01:00:35.272 "auth": { 01:00:35.272 "dhgroup": "ffdhe2048", 01:00:35.272 "digest": "sha512", 01:00:35.272 "state": "completed" 01:00:35.272 }, 01:00:35.272 "cntlid": 109, 01:00:35.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:00:35.272 "listen_address": { 01:00:35.272 "adrfam": "IPv4", 01:00:35.272 "traddr": "10.0.0.3", 01:00:35.272 "trsvcid": "4420", 01:00:35.272 "trtype": "TCP" 01:00:35.272 }, 01:00:35.272 "peer_address": { 01:00:35.272 "adrfam": "IPv4", 01:00:35.272 "traddr": "10.0.0.1", 01:00:35.272 "trsvcid": "50202", 01:00:35.272 "trtype": "TCP" 01:00:35.272 }, 01:00:35.272 "qid": 0, 01:00:35.272 "state": "enabled", 01:00:35.272 "thread": "nvmf_tgt_poll_group_000" 01:00:35.272 } 01:00:35.272 ]' 01:00:35.272 05:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:00:35.272 05:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:00:35.272 05:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:00:35.272 05:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:00:35.272 05:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:00:35.272 05:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:00:35.272 05:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:00:35.272 05:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:00:35.531 05:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 01:00:35.531 05:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 01:00:36.468 05:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:00:36.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
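The block above is one pass of the nvmf_auth_target loop: for each digest/dhgroup combination the SPDK host's bdev_nvme options are reconfigured, the host NQN is re-added to the subsystem with a DH-HMAC-CHAP key, a controller is attached and the negotiated auth parameters on the qpair are checked with jq, the same credentials are then exercised through the kernel nvme-cli initiator, and the host is removed again before the next combination. A condensed sketch of a single iteration, using only the RPCs, flags and addresses visible in this trace, is shown below; rpc_cmd is the harness wrapper for the target application's RPC socket, key1/ckey1 are DH-HMAC-CHAP key names registered earlier in the run, and the DHHC-1 secrets are shortened to placeholders rather than the real keys from this log:

    # host-side bdev_nvme options: allowed DH-HMAC-CHAP digest(s) and DH group(s)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

    # target side: allow the host NQN on the subsystem with key1 (ckey1 for bidirectional auth)
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # attach a controller from the SPDK host and verify the qpair authenticated as expected
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # same credentials through the kernel initiator (secrets are placeholders here)
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 \
        --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 \
        --dhchap-secret 'DHHC-1:01:<key1>:' --dhchap-ctrl-secret 'DHHC-1:02:<ckey1>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    # drop the host again before the next digest/dhgroup combination
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2
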
01:00:36.468 05:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:00:36.468 05:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:36.468 05:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:36.468 05:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:36.468 05:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:00:36.468 05:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:00:36.468 05:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:00:36.726 05:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 01:00:36.726 05:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:00:36.726 05:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:00:36.726 05:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 01:00:36.726 05:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:00:36.726 05:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:00:36.726 05:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key3 01:00:36.726 05:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:36.726 05:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:36.726 05:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:36.726 05:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:00:36.726 05:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:00:36.726 05:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:00:36.984 01:00:36.984 05:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:00:36.984 05:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:00:36.984 05:59:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:00:37.243 05:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:37.243 05:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:00:37.243 05:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:37.243 05:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:37.243 05:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:37.243 05:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:00:37.243 { 01:00:37.243 "auth": { 01:00:37.243 "dhgroup": "ffdhe2048", 01:00:37.243 "digest": "sha512", 01:00:37.243 "state": "completed" 01:00:37.243 }, 01:00:37.243 "cntlid": 111, 01:00:37.243 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:00:37.243 "listen_address": { 01:00:37.243 "adrfam": "IPv4", 01:00:37.243 "traddr": "10.0.0.3", 01:00:37.243 "trsvcid": "4420", 01:00:37.243 "trtype": "TCP" 01:00:37.243 }, 01:00:37.243 "peer_address": { 01:00:37.243 "adrfam": "IPv4", 01:00:37.243 "traddr": "10.0.0.1", 01:00:37.243 "trsvcid": "50228", 01:00:37.243 "trtype": "TCP" 01:00:37.243 }, 01:00:37.243 "qid": 0, 01:00:37.243 "state": "enabled", 01:00:37.243 "thread": "nvmf_tgt_poll_group_000" 01:00:37.243 } 01:00:37.243 ]' 01:00:37.243 05:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:00:37.243 05:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:00:37.243 05:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:00:37.502 05:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 01:00:37.502 05:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:00:37.502 05:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:00:37.502 05:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:00:37.502 05:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:00:37.760 05:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 01:00:37.760 05:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 01:00:38.328 05:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:00:38.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:00:38.328 05:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:00:38.328 05:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:38.328 05:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:38.328 05:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:38.328 05:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:00:38.328 05:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:00:38.328 05:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:00:38.328 05:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:00:38.587 05:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 01:00:38.587 05:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:00:38.587 05:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:00:38.587 05:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:00:38.587 05:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:00:38.587 05:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:00:38.587 05:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:00:38.587 05:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:38.587 05:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:38.587 05:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:38.587 05:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:00:38.587 05:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:00:38.587 05:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:00:39.156 01:00:39.156 05:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:00:39.156 05:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:00:39.156 05:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:00:39.414 05:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:39.414 05:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:00:39.414 05:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:39.414 05:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:39.414 05:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:39.414 05:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:00:39.414 { 01:00:39.414 "auth": { 01:00:39.414 "dhgroup": "ffdhe3072", 01:00:39.414 "digest": "sha512", 01:00:39.414 "state": "completed" 01:00:39.414 }, 01:00:39.414 "cntlid": 113, 01:00:39.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:00:39.414 "listen_address": { 01:00:39.414 "adrfam": "IPv4", 01:00:39.414 "traddr": "10.0.0.3", 01:00:39.414 "trsvcid": "4420", 01:00:39.414 "trtype": "TCP" 01:00:39.414 }, 01:00:39.414 "peer_address": { 01:00:39.414 "adrfam": "IPv4", 01:00:39.414 "traddr": "10.0.0.1", 01:00:39.414 "trsvcid": "50244", 01:00:39.414 "trtype": "TCP" 01:00:39.414 }, 01:00:39.414 "qid": 0, 01:00:39.414 "state": "enabled", 01:00:39.414 "thread": "nvmf_tgt_poll_group_000" 01:00:39.414 } 01:00:39.414 ]' 01:00:39.414 05:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:00:39.414 05:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:00:39.414 05:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:00:39.414 05:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:00:39.414 05:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:00:39.414 05:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:00:39.414 05:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:00:39.415 05:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:00:39.673 05:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 01:00:39.673 05:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret 
DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 01:00:40.241 05:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:00:40.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:00:40.241 05:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:00:40.241 05:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:40.241 05:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:40.241 05:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:40.241 05:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:00:40.241 05:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:00:40.241 05:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:00:40.809 05:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 01:00:40.809 05:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:00:40.809 05:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:00:40.809 05:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:00:40.809 05:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:00:40.809 05:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:00:40.809 05:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:00:40.809 05:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:40.809 05:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:40.809 05:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:40.809 05:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:00:40.809 05:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:00:40.809 05:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 01:00:41.068 01:00:41.068 05:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:00:41.068 05:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:00:41.068 05:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:00:41.327 05:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:41.327 05:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:00:41.327 05:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:41.327 05:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:41.327 05:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:41.327 05:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:00:41.327 { 01:00:41.327 "auth": { 01:00:41.327 "dhgroup": "ffdhe3072", 01:00:41.327 "digest": "sha512", 01:00:41.327 "state": "completed" 01:00:41.327 }, 01:00:41.327 "cntlid": 115, 01:00:41.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:00:41.327 "listen_address": { 01:00:41.327 "adrfam": "IPv4", 01:00:41.327 "traddr": "10.0.0.3", 01:00:41.327 "trsvcid": "4420", 01:00:41.327 "trtype": "TCP" 01:00:41.327 }, 01:00:41.327 "peer_address": { 01:00:41.327 "adrfam": "IPv4", 01:00:41.327 "traddr": "10.0.0.1", 01:00:41.327 "trsvcid": "49384", 01:00:41.327 "trtype": "TCP" 01:00:41.327 }, 01:00:41.327 "qid": 0, 01:00:41.327 "state": "enabled", 01:00:41.327 "thread": "nvmf_tgt_poll_group_000" 01:00:41.327 } 01:00:41.327 ]' 01:00:41.327 05:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:00:41.327 05:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:00:41.327 05:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:00:41.327 05:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:00:41.327 05:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:00:41.327 05:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:00:41.327 05:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:00:41.327 05:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:00:41.585 05:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 01:00:41.585 05:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 
4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 01:00:42.534 05:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:00:42.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:00:42.534 05:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:00:42.534 05:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:42.534 05:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:42.534 05:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:42.534 05:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:00:42.534 05:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:00:42.534 05:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:00:42.534 05:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 01:00:42.534 05:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:00:42.534 05:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:00:42.534 05:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:00:42.534 05:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:00:42.534 05:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:00:42.534 05:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:42.534 05:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:42.534 05:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:42.534 05:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:42.534 05:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:42.534 05:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:42.534 05:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:43.102 01:00:43.102 05:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:00:43.102 05:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:00:43.102 05:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:00:43.362 05:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:43.362 05:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:00:43.362 05:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:43.362 05:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:43.362 05:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:43.362 05:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:00:43.362 { 01:00:43.362 "auth": { 01:00:43.362 "dhgroup": "ffdhe3072", 01:00:43.362 "digest": "sha512", 01:00:43.362 "state": "completed" 01:00:43.362 }, 01:00:43.362 "cntlid": 117, 01:00:43.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:00:43.362 "listen_address": { 01:00:43.362 "adrfam": "IPv4", 01:00:43.362 "traddr": "10.0.0.3", 01:00:43.362 "trsvcid": "4420", 01:00:43.362 "trtype": "TCP" 01:00:43.362 }, 01:00:43.362 "peer_address": { 01:00:43.362 "adrfam": "IPv4", 01:00:43.362 "traddr": "10.0.0.1", 01:00:43.362 "trsvcid": "49400", 01:00:43.362 "trtype": "TCP" 01:00:43.362 }, 01:00:43.362 "qid": 0, 01:00:43.362 "state": "enabled", 01:00:43.362 "thread": "nvmf_tgt_poll_group_000" 01:00:43.362 } 01:00:43.362 ]' 01:00:43.362 05:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:00:43.362 05:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:00:43.362 05:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:00:43.362 05:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:00:43.362 05:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:00:43.362 05:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:00:43.362 05:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:00:43.362 05:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:00:43.622 05:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 01:00:43.622 05:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 01:00:44.559 05:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:00:44.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:00:44.559 05:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:00:44.559 05:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:44.559 05:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:44.559 05:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:44.559 05:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:00:44.559 05:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:00:44.559 05:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:00:44.818 05:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 01:00:44.818 05:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:00:44.818 05:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:00:44.818 05:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 01:00:44.818 05:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:00:44.818 05:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:00:44.818 05:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key3 01:00:44.818 05:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:44.818 05:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:44.818 05:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:44.818 05:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:00:44.818 05:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:00:44.818 05:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:00:45.077 01:00:45.077 05:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:00:45.077 05:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:00:45.077 05:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:00:45.336 05:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:45.336 05:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:00:45.336 05:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:45.336 05:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:45.336 05:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:45.336 05:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:00:45.336 { 01:00:45.336 "auth": { 01:00:45.336 "dhgroup": "ffdhe3072", 01:00:45.336 "digest": "sha512", 01:00:45.336 "state": "completed" 01:00:45.336 }, 01:00:45.336 "cntlid": 119, 01:00:45.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:00:45.336 "listen_address": { 01:00:45.336 "adrfam": "IPv4", 01:00:45.336 "traddr": "10.0.0.3", 01:00:45.336 "trsvcid": "4420", 01:00:45.336 "trtype": "TCP" 01:00:45.336 }, 01:00:45.336 "peer_address": { 01:00:45.336 "adrfam": "IPv4", 01:00:45.336 "traddr": "10.0.0.1", 01:00:45.336 "trsvcid": "49418", 01:00:45.336 "trtype": "TCP" 01:00:45.336 }, 01:00:45.336 "qid": 0, 01:00:45.336 "state": "enabled", 01:00:45.336 "thread": "nvmf_tgt_poll_group_000" 01:00:45.337 } 01:00:45.337 ]' 01:00:45.337 05:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:00:45.337 05:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:00:45.337 05:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:00:45.596 05:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 01:00:45.596 05:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:00:45.596 05:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:00:45.596 05:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:00:45.596 05:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:00:45.854 05:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 01:00:45.854 05:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 01:00:46.421 05:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:00:46.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:00:46.421 05:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:00:46.421 05:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:46.421 05:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:46.421 05:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:46.421 05:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:00:46.421 05:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:00:46.421 05:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:00:46.421 05:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:00:46.680 05:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 01:00:46.680 05:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:00:46.680 05:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:00:46.680 05:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:00:46.680 05:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:00:46.680 05:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:00:46.680 05:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:00:46.680 05:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:46.680 05:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:46.680 05:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:46.680 05:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:00:46.680 05:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:00:46.680 05:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:00:47.248 01:00:47.248 05:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:00:47.248 05:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:00:47.248 05:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:00:47.507 05:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:47.507 05:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:00:47.507 05:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:47.507 05:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:47.507 05:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:47.507 05:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:00:47.507 { 01:00:47.507 "auth": { 01:00:47.507 "dhgroup": "ffdhe4096", 01:00:47.507 "digest": "sha512", 01:00:47.507 "state": "completed" 01:00:47.507 }, 01:00:47.507 "cntlid": 121, 01:00:47.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:00:47.507 "listen_address": { 01:00:47.507 "adrfam": "IPv4", 01:00:47.507 "traddr": "10.0.0.3", 01:00:47.507 "trsvcid": "4420", 01:00:47.507 "trtype": "TCP" 01:00:47.507 }, 01:00:47.507 "peer_address": { 01:00:47.507 "adrfam": "IPv4", 01:00:47.507 "traddr": "10.0.0.1", 01:00:47.507 "trsvcid": "49444", 01:00:47.507 "trtype": "TCP" 01:00:47.507 }, 01:00:47.507 "qid": 0, 01:00:47.507 "state": "enabled", 01:00:47.507 "thread": "nvmf_tgt_poll_group_000" 01:00:47.507 } 01:00:47.507 ]' 01:00:47.507 05:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:00:47.507 05:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:00:47.507 05:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:00:47.507 05:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:00:47.507 05:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:00:47.507 05:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:00:47.507 05:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:00:47.507 05:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:00:47.766 05:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret 
DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 01:00:47.766 05:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 01:00:48.738 05:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:00:48.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:00:48.738 05:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:00:48.738 05:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:48.738 05:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:48.738 05:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:48.738 05:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:00:48.738 05:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:00:48.738 05:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:00:48.738 05:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 01:00:48.738 05:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:00:48.738 05:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:00:48.738 05:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:00:48.738 05:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:00:48.738 05:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:00:48.738 05:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:00:48.738 05:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:48.738 05:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:48.738 05:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:48.738 05:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:00:48.738 05:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:00:48.738 05:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:00:49.020 01:00:49.308 05:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:00:49.308 05:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:00:49.308 05:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:00:49.576 05:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:49.576 05:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:00:49.576 05:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:49.576 05:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:49.576 05:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:49.576 05:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:00:49.576 { 01:00:49.576 "auth": { 01:00:49.576 "dhgroup": "ffdhe4096", 01:00:49.576 "digest": "sha512", 01:00:49.576 "state": "completed" 01:00:49.576 }, 01:00:49.576 "cntlid": 123, 01:00:49.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:00:49.576 "listen_address": { 01:00:49.576 "adrfam": "IPv4", 01:00:49.576 "traddr": "10.0.0.3", 01:00:49.576 "trsvcid": "4420", 01:00:49.576 "trtype": "TCP" 01:00:49.576 }, 01:00:49.576 "peer_address": { 01:00:49.576 "adrfam": "IPv4", 01:00:49.576 "traddr": "10.0.0.1", 01:00:49.576 "trsvcid": "49474", 01:00:49.576 "trtype": "TCP" 01:00:49.576 }, 01:00:49.576 "qid": 0, 01:00:49.576 "state": "enabled", 01:00:49.576 "thread": "nvmf_tgt_poll_group_000" 01:00:49.576 } 01:00:49.576 ]' 01:00:49.576 05:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:00:49.576 05:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:00:49.576 05:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:00:49.577 05:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:00:49.577 05:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:00:49.577 05:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:00:49.577 05:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:00:49.577 05:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:00:49.835 05:59:44 
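For orientation, every connect_authenticate pass recorded here follows the same host/target sequence; the following is a minimal sketch of one pass, with addresses, NQNs and key labels copied from the log, the loop bookkeeping omitted, and the assumption that key1/ckey1 were registered on both sides earlier in the script (rpc_cmd is the test environment's target-side RPC helper):

# sketch of one authentication pass (sha512 digest, ffdhe4096 DH group, key index 1)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2
subnqn=nqn.2024-03.io.spdk:cnode0

# pin the host-side initiator to one digest/dhgroup combination
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# authorize the host on the subsystem with a DH-HMAC-CHAP key pair (target side)
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

# attach an authenticated controller from the host-side SPDK application
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1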
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 01:00:49.835 05:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 01:00:50.402 05:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:00:50.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:00:50.402 05:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:00:50.402 05:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:50.402 05:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:50.402 05:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:50.402 05:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:00:50.402 05:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:00:50.402 05:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:00:50.661 05:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 01:00:50.661 05:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:00:50.661 05:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:00:50.661 05:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:00:50.661 05:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:00:50.661 05:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:00:50.661 05:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:50.661 05:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:50.661 05:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:50.661 05:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:50.661 05:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:50.661 05:59:45 
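Each verification step in these passes reduces to three jq probes against the target's nvmf_subsystem_get_qpairs output; a minimal sketch, again assuming rpc_cmd is the target-side RPC helper used throughout this log:

# sketch: confirm the admin qpair negotiated the expected auth parameters
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]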
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:50.661 05:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:50.920 01:00:51.178 05:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:00:51.178 05:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:00:51.178 05:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:00:51.436 05:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:51.436 05:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:00:51.436 05:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:51.436 05:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:51.436 05:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:51.436 05:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:00:51.436 { 01:00:51.436 "auth": { 01:00:51.436 "dhgroup": "ffdhe4096", 01:00:51.436 "digest": "sha512", 01:00:51.436 "state": "completed" 01:00:51.436 }, 01:00:51.436 "cntlid": 125, 01:00:51.436 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:00:51.436 "listen_address": { 01:00:51.436 "adrfam": "IPv4", 01:00:51.436 "traddr": "10.0.0.3", 01:00:51.436 "trsvcid": "4420", 01:00:51.436 "trtype": "TCP" 01:00:51.436 }, 01:00:51.436 "peer_address": { 01:00:51.436 "adrfam": "IPv4", 01:00:51.436 "traddr": "10.0.0.1", 01:00:51.436 "trsvcid": "55618", 01:00:51.436 "trtype": "TCP" 01:00:51.436 }, 01:00:51.436 "qid": 0, 01:00:51.436 "state": "enabled", 01:00:51.436 "thread": "nvmf_tgt_poll_group_000" 01:00:51.436 } 01:00:51.436 ]' 01:00:51.436 05:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:00:51.436 05:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:00:51.436 05:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:00:51.436 05:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:00:51.436 05:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:00:51.436 05:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:00:51.436 05:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:00:51.436 05:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:00:52.001 05:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 01:00:52.001 05:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 01:00:52.567 05:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:00:52.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:00:52.567 05:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:00:52.567 05:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:52.567 05:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:52.567 05:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:52.567 05:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:00:52.567 05:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:00:52.567 05:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:00:52.825 05:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 01:00:52.825 05:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:00:52.825 05:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:00:52.825 05:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 01:00:52.825 05:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:00:52.825 05:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:00:52.825 05:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key3 01:00:52.825 05:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:52.825 05:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:52.825 05:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:52.825 05:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 01:00:52.825 05:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:00:52.825 05:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:00:53.083 01:00:53.083 05:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:00:53.083 05:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:00:53.083 05:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:00:53.342 05:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:53.342 05:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:00:53.342 05:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:53.342 05:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:53.342 05:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:53.342 05:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:00:53.342 { 01:00:53.342 "auth": { 01:00:53.342 "dhgroup": "ffdhe4096", 01:00:53.342 "digest": "sha512", 01:00:53.342 "state": "completed" 01:00:53.342 }, 01:00:53.342 "cntlid": 127, 01:00:53.342 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:00:53.342 "listen_address": { 01:00:53.342 "adrfam": "IPv4", 01:00:53.342 "traddr": "10.0.0.3", 01:00:53.342 "trsvcid": "4420", 01:00:53.342 "trtype": "TCP" 01:00:53.342 }, 01:00:53.342 "peer_address": { 01:00:53.342 "adrfam": "IPv4", 01:00:53.342 "traddr": "10.0.0.1", 01:00:53.342 "trsvcid": "55632", 01:00:53.342 "trtype": "TCP" 01:00:53.342 }, 01:00:53.342 "qid": 0, 01:00:53.342 "state": "enabled", 01:00:53.342 "thread": "nvmf_tgt_poll_group_000" 01:00:53.342 } 01:00:53.342 ]' 01:00:53.342 05:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:00:53.601 05:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:00:53.601 05:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:00:53.601 05:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 01:00:53.601 05:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:00:53.601 05:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:00:53.601 05:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:00:53.601 05:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:00:53.861 05:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 01:00:53.861 05:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 01:00:54.431 05:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:00:54.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:00:54.431 05:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:00:54.431 05:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:54.431 05:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:54.431 05:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:54.431 05:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:00:54.431 05:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:00:54.431 05:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:00:54.431 05:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:00:54.691 05:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 01:00:54.691 05:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:00:54.691 05:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:00:54.691 05:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:00:54.691 05:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:00:54.691 05:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:00:54.691 05:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:00:54.691 05:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:54.691 05:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:54.691 05:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:54.691 05:59:49 
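Each pass also exercises the kernel initiator with the same credentials in nvme-cli's DHHC-1 text form, then tears the authorization down again. A condensed sketch of that leg, with the secrets deliberately elided and the flags exactly as they appear in the log:

# sketch: kernel-initiator connect with DH-HMAC-CHAP secrets, then cleanup
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 \
        --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 \
        --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0    # expect: disconnected 1 controller(s)
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2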
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:00:54.691 05:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:00:54.691 05:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:00:55.260 01:00:55.260 05:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:00:55.260 05:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:00:55.260 05:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:00:55.519 05:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:55.519 05:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:00:55.519 05:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:55.519 05:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:55.519 05:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:55.519 05:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:00:55.519 { 01:00:55.519 "auth": { 01:00:55.519 "dhgroup": "ffdhe6144", 01:00:55.519 "digest": "sha512", 01:00:55.519 "state": "completed" 01:00:55.519 }, 01:00:55.519 "cntlid": 129, 01:00:55.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:00:55.519 "listen_address": { 01:00:55.519 "adrfam": "IPv4", 01:00:55.519 "traddr": "10.0.0.3", 01:00:55.519 "trsvcid": "4420", 01:00:55.519 "trtype": "TCP" 01:00:55.519 }, 01:00:55.519 "peer_address": { 01:00:55.519 "adrfam": "IPv4", 01:00:55.519 "traddr": "10.0.0.1", 01:00:55.519 "trsvcid": "55658", 01:00:55.519 "trtype": "TCP" 01:00:55.519 }, 01:00:55.519 "qid": 0, 01:00:55.519 "state": "enabled", 01:00:55.519 "thread": "nvmf_tgt_poll_group_000" 01:00:55.519 } 01:00:55.519 ]' 01:00:55.519 05:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:00:55.519 05:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:00:55.519 05:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:00:55.519 05:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:00:55.519 05:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:00:55.519 05:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:00:55.519 05:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:00:55.519 05:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:00:56.088 05:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 01:00:56.088 05:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 01:00:56.655 05:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:00:56.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:00:56.655 05:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:00:56.655 05:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:56.655 05:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:56.655 05:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:56.655 05:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:00:56.655 05:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:00:56.655 05:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:00:56.914 05:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 01:00:56.914 05:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:00:56.914 05:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:00:56.914 05:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:00:56.914 05:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:00:56.914 05:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:00:56.914 05:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:00:56.914 05:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:56.914 05:59:51 
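Every hostrpc line logged from target/auth.sh@31 expands to the same rpc.py call against the second (host-side) SPDK application's socket, so as far as this log is concerned the helper behaves like the sketch below; the nvme0 check mirrors the bdev_nvme_get_controllers | jq probe that follows each attach:

# sketch of what hostrpc visibly does here: forward an RPC to the host-side app
hostrpc() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
}
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]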
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:56.914 05:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:56.914 05:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:00:56.914 05:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:00:56.914 05:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:00:57.479 01:00:57.479 05:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:00:57.479 05:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:00:57.479 05:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:00:57.737 05:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:57.737 05:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:00:57.737 05:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:57.737 05:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:57.737 05:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:57.737 05:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:00:57.737 { 01:00:57.737 "auth": { 01:00:57.737 "dhgroup": "ffdhe6144", 01:00:57.737 "digest": "sha512", 01:00:57.737 "state": "completed" 01:00:57.737 }, 01:00:57.737 "cntlid": 131, 01:00:57.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:00:57.737 "listen_address": { 01:00:57.737 "adrfam": "IPv4", 01:00:57.737 "traddr": "10.0.0.3", 01:00:57.737 "trsvcid": "4420", 01:00:57.737 "trtype": "TCP" 01:00:57.737 }, 01:00:57.737 "peer_address": { 01:00:57.737 "adrfam": "IPv4", 01:00:57.737 "traddr": "10.0.0.1", 01:00:57.737 "trsvcid": "55696", 01:00:57.737 "trtype": "TCP" 01:00:57.737 }, 01:00:57.737 "qid": 0, 01:00:57.737 "state": "enabled", 01:00:57.737 "thread": "nvmf_tgt_poll_group_000" 01:00:57.737 } 01:00:57.737 ]' 01:00:57.737 05:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:00:57.737 05:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:00:57.737 05:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:00:57.737 05:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:00:57.737 05:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 01:00:57.737 05:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:00:57.737 05:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:00:57.737 05:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:00:57.994 05:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 01:00:57.994 05:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 01:00:58.928 05:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:00:58.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:00:58.928 05:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:00:58.928 05:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:58.928 05:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:58.928 05:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:58.928 05:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:00:58.928 05:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:00:58.928 05:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:00:58.928 05:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 01:00:58.928 05:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:00:58.928 05:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:00:58.928 05:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:00:58.928 05:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:00:58.928 05:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:00:58.928 05:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:58.928 05:59:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:58.928 05:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:58.928 05:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:58.928 05:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:58.928 05:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:58.928 05:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:00:59.495 01:00:59.495 05:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:00:59.495 05:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:00:59.495 05:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:00:59.754 05:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:00:59.754 05:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:00:59.754 05:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:00:59.754 05:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:00:59.754 05:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:00:59.754 05:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:00:59.754 { 01:00:59.754 "auth": { 01:00:59.754 "dhgroup": "ffdhe6144", 01:00:59.754 "digest": "sha512", 01:00:59.754 "state": "completed" 01:00:59.754 }, 01:00:59.754 "cntlid": 133, 01:00:59.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:00:59.754 "listen_address": { 01:00:59.754 "adrfam": "IPv4", 01:00:59.754 "traddr": "10.0.0.3", 01:00:59.754 "trsvcid": "4420", 01:00:59.754 "trtype": "TCP" 01:00:59.754 }, 01:00:59.754 "peer_address": { 01:00:59.754 "adrfam": "IPv4", 01:00:59.754 "traddr": "10.0.0.1", 01:00:59.754 "trsvcid": "55718", 01:00:59.754 "trtype": "TCP" 01:00:59.754 }, 01:00:59.754 "qid": 0, 01:00:59.754 "state": "enabled", 01:00:59.754 "thread": "nvmf_tgt_poll_group_000" 01:00:59.754 } 01:00:59.754 ]' 01:00:59.754 05:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:00:59.754 05:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:00:59.754 05:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:00:59.754 05:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 01:00:59.754 05:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:01:00.012 05:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:01:00.012 05:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:01:00.012 05:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:01:00.012 05:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 01:01:00.012 05:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 01:01:00.578 05:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:01:00.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:01:00.578 05:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:01:00.578 05:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:00.578 05:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:00.578 05:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:00.835 05:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:01:00.835 05:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:01:00.835 05:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:01:01.093 05:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 01:01:01.093 05:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:01:01.093 05:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:01:01.093 05:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 01:01:01.093 05:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:01:01.093 05:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:01:01.093 05:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key3 01:01:01.093 05:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:01.093 05:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:01.093 05:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:01.093 05:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:01:01.093 05:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:01:01.093 05:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:01:01.661 01:01:01.661 05:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:01:01.661 05:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:01:01.661 05:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:01:01.921 05:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:01:01.921 05:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:01:01.921 05:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:01.921 05:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:01.921 05:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:01.921 05:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:01:01.921 { 01:01:01.921 "auth": { 01:01:01.921 "dhgroup": "ffdhe6144", 01:01:01.921 "digest": "sha512", 01:01:01.921 "state": "completed" 01:01:01.921 }, 01:01:01.921 "cntlid": 135, 01:01:01.921 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:01:01.921 "listen_address": { 01:01:01.921 "adrfam": "IPv4", 01:01:01.921 "traddr": "10.0.0.3", 01:01:01.921 "trsvcid": "4420", 01:01:01.921 "trtype": "TCP" 01:01:01.921 }, 01:01:01.921 "peer_address": { 01:01:01.921 "adrfam": "IPv4", 01:01:01.921 "traddr": "10.0.0.1", 01:01:01.921 "trsvcid": "45260", 01:01:01.921 "trtype": "TCP" 01:01:01.921 }, 01:01:01.921 "qid": 0, 01:01:01.921 "state": "enabled", 01:01:01.921 "thread": "nvmf_tgt_poll_group_000" 01:01:01.921 } 01:01:01.921 ]' 01:01:01.921 05:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:01:01.921 05:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:01:01.921 05:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:01:01.921 05:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 01:01:01.921 05:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:01:01.921 05:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:01:01.921 05:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:01:01.921 05:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:01:02.181 05:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 01:01:02.181 05:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 01:01:02.750 05:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:01:02.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:01:02.750 05:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:01:02.750 05:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:02.750 05:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:02.750 05:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:02.750 05:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 01:01:02.750 05:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:01:02.750 05:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:01:02.750 05:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:01:03.009 05:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 01:01:03.009 05:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:01:03.009 05:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:01:03.009 05:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:01:03.009 05:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:01:03.009 05:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:01:03.010 05:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:01:03.010 05:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:03.010 05:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:03.010 05:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:03.010 05:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:01:03.010 05:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:01:03.010 05:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:01:03.579 01:01:03.839 05:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:01:03.839 05:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:01:03.839 05:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:01:03.839 05:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:01:03.839 05:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:01:03.839 05:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:03.839 05:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:03.839 05:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:03.839 05:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:01:03.839 { 01:01:03.839 "auth": { 01:01:03.839 "dhgroup": "ffdhe8192", 01:01:03.839 "digest": "sha512", 01:01:03.839 "state": "completed" 01:01:03.839 }, 01:01:03.839 "cntlid": 137, 01:01:03.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:01:03.839 "listen_address": { 01:01:03.839 "adrfam": "IPv4", 01:01:03.839 "traddr": "10.0.0.3", 01:01:03.839 "trsvcid": "4420", 01:01:03.839 "trtype": "TCP" 01:01:03.839 }, 01:01:03.839 "peer_address": { 01:01:03.839 "adrfam": "IPv4", 01:01:03.839 "traddr": "10.0.0.1", 01:01:03.839 "trsvcid": "45304", 01:01:03.839 "trtype": "TCP" 01:01:03.839 }, 01:01:03.839 "qid": 0, 01:01:03.839 "state": "enabled", 01:01:03.839 "thread": "nvmf_tgt_poll_group_000" 01:01:03.839 } 01:01:03.839 ]' 01:01:03.839 05:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:01:04.098 05:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:01:04.098 05:59:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:01:04.098 05:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:01:04.098 05:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:01:04.098 05:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:01:04.098 05:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:01:04.098 05:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:01:04.359 05:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 01:01:04.359 05:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 01:01:04.929 05:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:01:04.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:01:04.929 05:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:01:04.929 05:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:04.929 05:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:04.929 05:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:04.929 05:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:01:04.929 05:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:01:04.929 05:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:01:05.188 05:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 01:01:05.188 05:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:01:05.188 05:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:01:05.188 05:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:01:05.188 05:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 01:01:05.188 05:59:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:01:05.188 05:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:01:05.188 05:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:05.188 05:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:05.189 05:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:05.189 05:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:01:05.189 05:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:01:05.189 05:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:01:05.756 01:01:06.015 06:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:01:06.016 06:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:01:06.016 06:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:01:06.275 06:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:01:06.275 06:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:01:06.275 06:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:06.275 06:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:06.275 06:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:06.275 06:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:01:06.275 { 01:01:06.275 "auth": { 01:01:06.275 "dhgroup": "ffdhe8192", 01:01:06.275 "digest": "sha512", 01:01:06.275 "state": "completed" 01:01:06.275 }, 01:01:06.275 "cntlid": 139, 01:01:06.275 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:01:06.275 "listen_address": { 01:01:06.275 "adrfam": "IPv4", 01:01:06.275 "traddr": "10.0.0.3", 01:01:06.275 "trsvcid": "4420", 01:01:06.275 "trtype": "TCP" 01:01:06.275 }, 01:01:06.275 "peer_address": { 01:01:06.275 "adrfam": "IPv4", 01:01:06.275 "traddr": "10.0.0.1", 01:01:06.275 "trsvcid": "45332", 01:01:06.275 "trtype": "TCP" 01:01:06.275 }, 01:01:06.275 "qid": 0, 01:01:06.275 "state": "enabled", 01:01:06.275 "thread": "nvmf_tgt_poll_group_000" 01:01:06.275 } 01:01:06.275 ]' 01:01:06.275 06:00:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:01:06.275 06:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:01:06.275 06:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:01:06.275 06:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:01:06.275 06:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:01:06.275 06:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:01:06.275 06:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:01:06.275 06:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:01:06.534 06:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 01:01:06.534 06:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: --dhchap-ctrl-secret DHHC-1:02:NTI2OWU0NjU4YzJkN2NlZmViMjYwMTc3NzVmM2JlZTJlZTUzYzU2ZDI4NWEzMTky3QfnSA==: 01:01:07.474 06:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:01:07.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:01:07.474 06:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:01:07.474 06:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:07.474 06:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:07.474 06:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:07.474 06:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:01:07.474 06:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:01:07.474 06:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:01:07.733 06:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 01:01:07.733 06:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:01:07.733 06:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:01:07.733 06:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 01:01:07.733 06:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 01:01:07.733 06:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:01:07.733 06:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:01:07.733 06:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:07.733 06:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:07.733 06:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:07.733 06:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:01:07.733 06:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:01:07.733 06:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:01:08.302 01:01:08.302 06:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:01:08.302 06:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:01:08.302 06:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:01:08.561 06:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:01:08.561 06:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:01:08.561 06:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:08.561 06:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:08.561 06:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:08.561 06:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:01:08.561 { 01:01:08.561 "auth": { 01:01:08.561 "dhgroup": "ffdhe8192", 01:01:08.561 "digest": "sha512", 01:01:08.561 "state": "completed" 01:01:08.561 }, 01:01:08.561 "cntlid": 141, 01:01:08.561 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:01:08.561 "listen_address": { 01:01:08.561 "adrfam": "IPv4", 01:01:08.561 "traddr": "10.0.0.3", 01:01:08.561 "trsvcid": "4420", 01:01:08.561 "trtype": "TCP" 01:01:08.561 }, 01:01:08.561 "peer_address": { 01:01:08.561 "adrfam": "IPv4", 01:01:08.561 "traddr": "10.0.0.1", 01:01:08.561 "trsvcid": "45358", 01:01:08.561 "trtype": "TCP" 01:01:08.561 }, 01:01:08.561 "qid": 0, 01:01:08.561 "state": 
"enabled", 01:01:08.561 "thread": "nvmf_tgt_poll_group_000" 01:01:08.561 } 01:01:08.561 ]' 01:01:08.561 06:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:01:08.561 06:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:01:08.561 06:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:01:08.561 06:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:01:08.561 06:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:01:08.821 06:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:01:08.821 06:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:01:08.821 06:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:01:09.081 06:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 01:01:09.081 06:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:01:MDkyZjg5MjQyM2U5MzlkNzAwNjBiYjg5NzFlODgzZjD4qisg: 01:01:09.650 06:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:01:09.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:01:09.650 06:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:01:09.650 06:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:09.650 06:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:09.650 06:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:09.650 06:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 01:01:09.650 06:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:01:09.650 06:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:01:09.909 06:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 01:01:09.909 06:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:01:09.909 06:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 01:01:09.909 06:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:01:09.909 06:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:01:09.909 06:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:01:09.909 06:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key3 01:01:09.909 06:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:09.909 06:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:09.909 06:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:09.909 06:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:01:09.909 06:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:01:09.909 06:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:01:10.477 01:01:10.477 06:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:01:10.477 06:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:01:10.477 06:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:01:10.735 06:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:01:10.735 06:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:01:10.735 06:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:10.735 06:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:10.735 06:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:10.735 06:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:01:10.735 { 01:01:10.735 "auth": { 01:01:10.735 "dhgroup": "ffdhe8192", 01:01:10.735 "digest": "sha512", 01:01:10.735 "state": "completed" 01:01:10.735 }, 01:01:10.735 "cntlid": 143, 01:01:10.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:01:10.735 "listen_address": { 01:01:10.735 "adrfam": "IPv4", 01:01:10.735 "traddr": "10.0.0.3", 01:01:10.735 "trsvcid": "4420", 01:01:10.735 "trtype": "TCP" 01:01:10.735 }, 01:01:10.735 "peer_address": { 01:01:10.735 "adrfam": "IPv4", 01:01:10.735 "traddr": "10.0.0.1", 01:01:10.735 "trsvcid": "45378", 01:01:10.735 "trtype": "TCP" 01:01:10.735 }, 01:01:10.735 "qid": 0, 01:01:10.735 
"state": "enabled", 01:01:10.735 "thread": "nvmf_tgt_poll_group_000" 01:01:10.735 } 01:01:10.735 ]' 01:01:10.735 06:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:01:10.735 06:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:01:10.735 06:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:01:10.735 06:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:01:10.735 06:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:01:10.994 06:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:01:10.994 06:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:01:10.994 06:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:01:11.253 06:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 01:01:11.253 06:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 01:01:11.820 06:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:01:11.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:01:11.820 06:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:01:11.820 06:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:11.820 06:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:11.820 06:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:11.820 06:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 01:01:11.820 06:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 01:01:11.820 06:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 01:01:11.820 06:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:01:11.821 06:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:01:11.821 06:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:01:12.079 06:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 01:01:12.079 06:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:01:12.079 06:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:01:12.079 06:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:01:12.079 06:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 01:01:12.079 06:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:01:12.079 06:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:01:12.079 06:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:12.079 06:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:12.079 06:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:12.079 06:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:01:12.079 06:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:01:12.079 06:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:01:12.701 01:01:12.701 06:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:01:12.701 06:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:01:12.701 06:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:01:12.959 06:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:01:12.959 06:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:01:12.959 06:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:12.959 06:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:12.959 06:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:12.959 06:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:01:12.960 { 01:01:12.960 "auth": { 01:01:12.960 "dhgroup": "ffdhe8192", 01:01:12.960 "digest": "sha512", 01:01:12.960 "state": "completed" 01:01:12.960 }, 01:01:12.960 
"cntlid": 145, 01:01:12.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:01:12.960 "listen_address": { 01:01:12.960 "adrfam": "IPv4", 01:01:12.960 "traddr": "10.0.0.3", 01:01:12.960 "trsvcid": "4420", 01:01:12.960 "trtype": "TCP" 01:01:12.960 }, 01:01:12.960 "peer_address": { 01:01:12.960 "adrfam": "IPv4", 01:01:12.960 "traddr": "10.0.0.1", 01:01:12.960 "trsvcid": "58994", 01:01:12.960 "trtype": "TCP" 01:01:12.960 }, 01:01:12.960 "qid": 0, 01:01:12.960 "state": "enabled", 01:01:12.960 "thread": "nvmf_tgt_poll_group_000" 01:01:12.960 } 01:01:12.960 ]' 01:01:12.960 06:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:01:12.960 06:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:01:12.960 06:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:01:12.960 06:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:01:12.960 06:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:01:12.960 06:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:01:12.960 06:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:01:12.960 06:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:01:13.217 06:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 01:01:13.217 06:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:00:ODIyNzU3YWZjM2E3NWNhMmViZmMwNDU5ZDliZTA5NjMxOTEwZWQ2YzgxZTkxNWQ2HXRR9Q==: --dhchap-ctrl-secret DHHC-1:03:Mjg3NjViMmU2M2FkYzBmNGRmMGE0ZjQzMzIxMmQxN2I1MmMxZGMzNGRjMDY0YzE5MGQ1M2Q2ZTg2YzIxMmMyZASR8Lc=: 01:01:13.783 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:01:13.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:01:13.783 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:01:13.783 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:13.783 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:13.783 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:13.783 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key1 01:01:13.783 06:00:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:13.783 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:13.783 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:13.783 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 01:01:13.783 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:01:13.783 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 01:01:13.783 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 01:01:14.041 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:14.042 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 01:01:14.042 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:14.042 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 01:01:14.042 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 01:01:14.042 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 01:01:14.610 2024/12/09 06:00:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:01:14.610 request: 01:01:14.610 { 01:01:14.610 "method": "bdev_nvme_attach_controller", 01:01:14.610 "params": { 01:01:14.610 "name": "nvme0", 01:01:14.610 "trtype": "tcp", 01:01:14.610 "traddr": "10.0.0.3", 01:01:14.610 "adrfam": "ipv4", 01:01:14.610 "trsvcid": "4420", 01:01:14.610 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:01:14.610 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:01:14.610 "prchk_reftag": false, 01:01:14.610 "prchk_guard": false, 01:01:14.610 "hdgst": false, 01:01:14.610 "ddgst": false, 01:01:14.610 "dhchap_key": "key2", 01:01:14.610 "allow_unrecognized_csi": false 01:01:14.610 } 01:01:14.610 } 01:01:14.610 Got JSON-RPC error response 01:01:14.610 GoRPCClient: error on JSON-RPC call 01:01:14.610 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:01:14.610 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 
01:01:14.610 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:01:14.610 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:14.610 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:01:14.610 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:14.610 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:14.610 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:14.610 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:01:14.610 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:14.610 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:14.610 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:14.610 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:01:14.610 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:01:14.610 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:01:14.610 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 01:01:14.610 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:14.610 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 01:01:14.610 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:14.610 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:01:14.610 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:01:14.610 06:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:01:15.177 2024/12/09 06:00:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:01:15.177 request: 01:01:15.177 { 01:01:15.177 "method": "bdev_nvme_attach_controller", 01:01:15.177 "params": { 01:01:15.177 "name": "nvme0", 01:01:15.177 "trtype": "tcp", 01:01:15.177 "traddr": "10.0.0.3", 01:01:15.177 "adrfam": "ipv4", 01:01:15.177 "trsvcid": "4420", 01:01:15.177 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:01:15.177 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:01:15.177 "prchk_reftag": false, 01:01:15.177 "prchk_guard": false, 01:01:15.177 "hdgst": false, 01:01:15.177 "ddgst": false, 01:01:15.177 "dhchap_key": "key1", 01:01:15.177 "dhchap_ctrlr_key": "ckey2", 01:01:15.177 "allow_unrecognized_csi": false 01:01:15.177 } 01:01:15.177 } 01:01:15.177 Got JSON-RPC error response 01:01:15.177 GoRPCClient: error on JSON-RPC call 01:01:15.177 06:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:01:15.177 06:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:15.177 06:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:01:15.177 06:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:15.177 06:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:01:15.177 06:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:15.177 06:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:15.177 06:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:15.177 06:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key1 01:01:15.177 06:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:15.177 06:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:15.177 06:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:15.177 06:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:01:15.177 06:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:01:15.177 06:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:01:15.177 06:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 01:01:15.177 06:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:15.177 06:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # 
type -t bdev_connect 01:01:15.177 06:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:15.177 06:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:01:15.177 06:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:01:15.177 06:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:01:15.745 2024/12/09 06:00:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:01:15.745 request: 01:01:15.745 { 01:01:15.745 "method": "bdev_nvme_attach_controller", 01:01:15.745 "params": { 01:01:15.745 "name": "nvme0", 01:01:15.745 "trtype": "tcp", 01:01:15.745 "traddr": "10.0.0.3", 01:01:15.745 "adrfam": "ipv4", 01:01:15.745 "trsvcid": "4420", 01:01:15.745 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:01:15.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:01:15.745 "prchk_reftag": false, 01:01:15.745 "prchk_guard": false, 01:01:15.745 "hdgst": false, 01:01:15.745 "ddgst": false, 01:01:15.745 "dhchap_key": "key1", 01:01:15.745 "dhchap_ctrlr_key": "ckey1", 01:01:15.745 "allow_unrecognized_csi": false 01:01:15.745 } 01:01:15.745 } 01:01:15.745 Got JSON-RPC error response 01:01:15.745 GoRPCClient: error on JSON-RPC call 01:01:15.745 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:01:15.745 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:15.745 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:01:15.745 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:15.745 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:01:15.745 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:15.745 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:15.745 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:15.745 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 75982 01:01:15.745 06:00:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 75982 ']' 01:01:15.745 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 75982 01:01:15.745 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 01:01:15.745 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:01:15.745 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75982 01:01:15.745 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:01:15.745 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:01:15.745 killing process with pid 75982 01:01:15.745 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75982' 01:01:15.745 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 75982 01:01:15.745 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 75982 01:01:15.745 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 01:01:15.745 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:01:15.745 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 01:01:15.745 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:15.745 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=80818 01:01:15.745 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 01:01:15.745 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 80818 01:01:15.745 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 80818 ']' 01:01:15.745 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:01:15.745 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 01:01:15.745 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
01:01:15.745 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 01:01:15.745 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:16.314 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:01:16.314 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 01:01:16.314 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:01:16.314 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 01:01:16.314 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:16.314 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:01:16.314 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 01:01:16.314 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 80818 01:01:16.314 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 80818 ']' 01:01:16.314 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:01:16.314 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 01:01:16.314 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:01:16.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
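At this point the previous target (pid 75982) has been killed and a fresh nvmf_tgt has been started inside the nvmf_tgt_ns_spdk namespace with --wait-for-rpc and -L nvmf_auth, so the authentication path logs at debug level and the script waits for /var/tmp/spdk.sock before configuring anything. The keyring_file_add_key calls that follow load each generated secret file into the keyring so later RPCs can refer to the keys by name (key0..key3 and their ckey counterparts). A rough sketch of that loading step, using file names taken from this run and the target's default RPC socket:

  # register generated DH-HMAC-CHAP secret files under short names in the keyring
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.aHB
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7fi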
01:01:16.314 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 01:01:16.314 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:16.574 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:01:16.574 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 01:01:16.574 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 01:01:16.574 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:16.574 06:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:16.574 null0 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.aHB 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.7fi ]] 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7fi 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.nUT 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.se5 ]] 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.se5 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 01:01:16.574 06:00:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ib1 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.i3i ]] 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.i3i 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.8gG 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key3 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:16.574 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 01:01:16.575 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
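connect_authenticate sha512 ffdhe8192 key3 (target/auth.sh@179) is the positive counterpart to the earlier rejections: the host is added to the subsystem with --dhchap-key key3 and then attaches presenting the same key, so authentication completes and the qpair dump a few lines below reports digest sha512, dhgroup ffdhe8192, state completed. A condensed sketch of that pairing, with the NQNs, address and sockets already used in this run:

  # target side: allow the host on the subsystem and bind it to key3
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key3
  # host side: attach while presenting the matching key
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3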
01:01:16.575 06:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:01:17.511 nvme0n1 01:01:17.511 06:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 01:01:17.511 06:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 01:01:17.511 06:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:01:18.080 06:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:01:18.080 06:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 01:01:18.080 06:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:18.080 06:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:18.080 06:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:18.080 06:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 01:01:18.080 { 01:01:18.080 "auth": { 01:01:18.080 "dhgroup": "ffdhe8192", 01:01:18.080 "digest": "sha512", 01:01:18.080 "state": "completed" 01:01:18.080 }, 01:01:18.080 "cntlid": 1, 01:01:18.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:01:18.080 "listen_address": { 01:01:18.080 "adrfam": "IPv4", 01:01:18.080 "traddr": "10.0.0.3", 01:01:18.080 "trsvcid": "4420", 01:01:18.080 "trtype": "TCP" 01:01:18.080 }, 01:01:18.080 "peer_address": { 01:01:18.080 "adrfam": "IPv4", 01:01:18.080 "traddr": "10.0.0.1", 01:01:18.080 "trsvcid": "59036", 01:01:18.080 "trtype": "TCP" 01:01:18.080 }, 01:01:18.080 "qid": 0, 01:01:18.080 "state": "enabled", 01:01:18.080 "thread": "nvmf_tgt_poll_group_000" 01:01:18.080 } 01:01:18.080 ]' 01:01:18.080 06:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 01:01:18.080 06:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 01:01:18.080 06:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 01:01:18.080 06:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 01:01:18.080 06:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 01:01:18.080 06:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 01:01:18.080 06:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 01:01:18.080 06:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:01:18.354 06:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 01:01:18.355 06:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 01:01:18.976 06:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:01:18.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:01:18.976 06:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:01:18.976 06:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:18.976 06:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:18.976 06:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:18.977 06:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key3 01:01:18.977 06:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:18.977 06:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:18.977 06:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:18.977 06:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 01:01:18.977 06:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 01:01:19.244 06:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 01:01:19.244 06:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:01:19.244 06:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 01:01:19.244 06:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 01:01:19.244 06:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:19.244 06:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 01:01:19.244 06:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:19.244 06:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 01:01:19.244 06:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:01:19.244 06:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:01:19.503 2024/12/09 06:00:14 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:01:19.503 request: 01:01:19.503 { 01:01:19.503 "method": "bdev_nvme_attach_controller", 01:01:19.503 "params": { 01:01:19.503 "name": "nvme0", 01:01:19.503 "trtype": "tcp", 01:01:19.503 "traddr": "10.0.0.3", 01:01:19.503 "adrfam": "ipv4", 01:01:19.503 "trsvcid": "4420", 01:01:19.503 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:01:19.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:01:19.503 "prchk_reftag": false, 01:01:19.504 "prchk_guard": false, 01:01:19.504 "hdgst": false, 01:01:19.504 "ddgst": false, 01:01:19.504 "dhchap_key": "key3", 01:01:19.504 "allow_unrecognized_csi": false 01:01:19.504 } 01:01:19.504 } 01:01:19.504 Got JSON-RPC error response 01:01:19.504 GoRPCClient: error on JSON-RPC call 01:01:19.504 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:01:19.504 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:19.504 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:01:19.504 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:19.504 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 01:01:19.504 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 01:01:19.504 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 01:01:19.504 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 01:01:20.072 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 01:01:20.072 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:01:20.072 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 01:01:20.072 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 01:01:20.072 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:20.072 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@644 -- # type -t bdev_connect 01:01:20.072 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:20.072 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 01:01:20.072 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:01:20.073 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 01:01:20.073 2024/12/09 06:00:14 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:01:20.073 request: 01:01:20.073 { 01:01:20.073 "method": "bdev_nvme_attach_controller", 01:01:20.073 "params": { 01:01:20.073 "name": "nvme0", 01:01:20.073 "trtype": "tcp", 01:01:20.073 "traddr": "10.0.0.3", 01:01:20.073 "adrfam": "ipv4", 01:01:20.073 "trsvcid": "4420", 01:01:20.073 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:01:20.073 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:01:20.073 "prchk_reftag": false, 01:01:20.073 "prchk_guard": false, 01:01:20.073 "hdgst": false, 01:01:20.073 "ddgst": false, 01:01:20.073 "dhchap_key": "key3", 01:01:20.073 "allow_unrecognized_csi": false 01:01:20.073 } 01:01:20.073 } 01:01:20.073 Got JSON-RPC error response 01:01:20.073 GoRPCClient: error on JSON-RPC call 01:01:20.073 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:01:20.073 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:20.073 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:01:20.073 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:20.073 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 01:01:20.073 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 01:01:20.073 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 01:01:20.073 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:01:20.073 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:01:20.073 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:01:20.332 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:01:20.332 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:20.332 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:20.591 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:20.591 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:01:20.591 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:20.591 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:20.591 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:20.591 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 01:01:20.591 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:01:20.591 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 01:01:20.591 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 01:01:20.591 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:20.591 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 01:01:20.591 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:20.591 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 01:01:20.591 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 01:01:20.591 06:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 01:01:20.850 2024/12/09 06:00:15 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) 
subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:01:20.850 request: 01:01:20.850 { 01:01:20.850 "method": "bdev_nvme_attach_controller", 01:01:20.850 "params": { 01:01:20.850 "name": "nvme0", 01:01:20.850 "trtype": "tcp", 01:01:20.850 "traddr": "10.0.0.3", 01:01:20.850 "adrfam": "ipv4", 01:01:20.850 "trsvcid": "4420", 01:01:20.850 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:01:20.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:01:20.850 "prchk_reftag": false, 01:01:20.850 "prchk_guard": false, 01:01:20.850 "hdgst": false, 01:01:20.850 "ddgst": false, 01:01:20.850 "dhchap_key": "key0", 01:01:20.850 "dhchap_ctrlr_key": "key1", 01:01:20.850 "allow_unrecognized_csi": false 01:01:20.850 } 01:01:20.850 } 01:01:20.850 Got JSON-RPC error response 01:01:20.850 GoRPCClient: error on JSON-RPC call 01:01:20.850 06:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:01:20.850 06:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:20.850 06:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:01:20.850 06:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:20.850 06:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 01:01:20.850 06:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 01:01:20.850 06:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 01:01:21.109 nvme0n1 01:01:21.109 06:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 01:01:21.109 06:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:01:21.109 06:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 01:01:21.367 06:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:01:21.367 06:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 01:01:21.367 06:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:01:21.626 06:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key1 01:01:21.626 06:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:21.626 06:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 01:01:21.626 06:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:21.626 06:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 01:01:21.626 06:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 01:01:21.626 06:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 01:01:22.561 nvme0n1 01:01:22.562 06:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 01:01:22.562 06:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 01:01:22.562 06:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:01:22.820 06:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:01:22.820 06:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key2 --dhchap-ctrlr-key key3 01:01:22.820 06:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:22.820 06:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:22.820 06:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:22.820 06:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 01:01:22.820 06:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 01:01:22.820 06:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:01:23.078 06:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:01:23.078 06:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 01:01:23.078 06:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid 4083adec-450d-4b97-8986-2f4423606fc2 -l 0 --dhchap-secret DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: --dhchap-ctrl-secret DHHC-1:03:NGFhNDE2MTNhMjVhNmUxMzhmZTNiMGEwNWM4MDdjMjAxZWQ1YjFhODA4MWFhNzE2YmMzMWE1MDA4NDYzNGQ3NBSYidM=: 01:01:23.645 06:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 
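From target/auth.sh@218 onward the test moves to key rotation: nvmf_subsystem_set_keys changes which keys the subsystem accepts from this host, in some steps while a controller is still attached, and the host then either re-attaches with the new keys or re-keys in place. A rough sketch of the rotation just performed here (to key2/key3), which is why the key1-only attach attempted shortly afterwards is expected to fail while the key2/key3 attach succeeds:

  # rotate the keys the subsystem will accept from this host to key2/key3
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 \
      --dhchap-key key2 --dhchap-ctrlr-key key3
  # a host attach still presenting key1 is now rejected; presenting key2/key3 succeeds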
01:01:23.645 06:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 01:01:23.645 06:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 01:01:23.645 06:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 01:01:23.645 06:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 01:01:23.645 06:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 01:01:23.645 06:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 01:01:23.645 06:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 01:01:23.645 06:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:01:24.212 06:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 01:01:24.212 06:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:01:24.212 06:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 01:01:24.212 06:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 01:01:24.212 06:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:24.212 06:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 01:01:24.212 06:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:24.212 06:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 01:01:24.213 06:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 01:01:24.213 06:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 01:01:24.488 2024/12/09 06:00:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:01:24.488 request: 01:01:24.488 { 01:01:24.488 "method": "bdev_nvme_attach_controller", 01:01:24.488 "params": { 01:01:24.488 "name": "nvme0", 01:01:24.488 "trtype": "tcp", 01:01:24.488 "traddr": "10.0.0.3", 01:01:24.488 "adrfam": "ipv4", 
01:01:24.488 "trsvcid": "4420", 01:01:24.488 "subnqn": "nqn.2024-03.io.spdk:cnode0", 01:01:24.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2", 01:01:24.488 "prchk_reftag": false, 01:01:24.488 "prchk_guard": false, 01:01:24.488 "hdgst": false, 01:01:24.488 "ddgst": false, 01:01:24.488 "dhchap_key": "key1", 01:01:24.488 "allow_unrecognized_csi": false 01:01:24.488 } 01:01:24.488 } 01:01:24.488 Got JSON-RPC error response 01:01:24.488 GoRPCClient: error on JSON-RPC call 01:01:24.488 06:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:01:24.488 06:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:24.488 06:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:01:24.488 06:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:24.488 06:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 01:01:24.488 06:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 01:01:24.488 06:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 01:01:25.424 nvme0n1 01:01:25.424 06:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 01:01:25.424 06:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 01:01:25.424 06:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:01:25.682 06:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:01:25.682 06:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 01:01:25.682 06:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:01:25.941 06:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:01:25.941 06:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:25.941 06:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:25.941 06:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:25.941 06:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 01:01:25.941 06:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 01:01:25.941 06:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 01:01:26.199 nvme0n1 01:01:26.199 06:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 01:01:26.199 06:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 01:01:26.199 06:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:01:26.765 06:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:01:26.765 06:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 01:01:26.765 06:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 01:01:27.023 06:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key1 --dhchap-ctrlr-key key3 01:01:27.023 06:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:27.023 06:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:27.023 06:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:27.023 06:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: '' 2s 01:01:27.023 06:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 01:01:27.023 06:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 01:01:27.023 06:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: 01:01:27.023 06:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 01:01:27.023 06:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 01:01:27.023 06:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 01:01:27.023 06:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: ]] 01:01:27.023 06:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YTllNWFiYzU4N2U1NTEwZTA3MmNkMzA0YmJkODE5ZDGPZmkp: 01:01:27.023 06:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 01:01:27.023 06:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 01:01:27.023 06:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 01:01:28.940 06:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@241 -- # waitforblk nvme0n1 01:01:28.940 06:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 01:01:28.940 06:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 01:01:28.940 06:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 01:01:28.940 06:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 01:01:28.940 06:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 01:01:28.940 06:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 01:01:28.940 06:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key1 --dhchap-ctrlr-key key2 01:01:28.940 06:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:28.940 06:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:28.940 06:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:28.940 06:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: 2s 01:01:28.940 06:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 01:01:28.940 06:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 01:01:28.940 06:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 01:01:28.940 06:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: 01:01:28.940 06:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 01:01:28.941 06:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 01:01:28.941 06:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 01:01:28.941 06:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: ]] 01:01:28.941 06:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MzA0YWZhYTZlYjZiZTNlZTY1MTlkMjk5MGZiNzNmODdiZWMzMGNiZjlmMjA4ZTczoBp6Jw==: 01:01:28.941 06:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 01:01:28.941 06:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 01:01:30.843 06:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 01:01:30.843 06:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 01:01:30.843 06:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 01:01:30.843 06:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 01:01:31.100 06:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1246 -- # lsblk -l -o NAME 01:01:31.100 06:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 01:01:31.100 06:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 01:01:31.100 06:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 01:01:31.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 01:01:31.100 06:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key0 --dhchap-ctrlr-key key1 01:01:31.100 06:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:31.100 06:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:31.100 06:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:31.100 06:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:01:31.100 06:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:01:31.100 06:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:01:32.035 nvme0n1 01:01:32.035 06:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key2 --dhchap-ctrlr-key key3 01:01:32.035 06:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:32.035 06:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:32.035 06:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:32.035 06:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 01:01:32.035 06:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 01:01:32.602 06:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 01:01:32.602 06:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:01:32.602 06:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r 
'.[].name' 01:01:32.860 06:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:01:32.860 06:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:01:32.860 06:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:32.860 06:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:32.860 06:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:32.860 06:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 01:01:32.860 06:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 01:01:33.119 06:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 01:01:33.119 06:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 01:01:33.119 06:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:01:33.378 06:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:01:33.378 06:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key2 --dhchap-ctrlr-key key3 01:01:33.378 06:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:33.378 06:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:33.378 06:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:33.378 06:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 01:01:33.378 06:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:01:33.378 06:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 01:01:33.378 06:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 01:01:33.378 06:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:33.378 06:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 01:01:33.378 06:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:33.378 06:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 01:01:33.378 06:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 
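The trace above exercises DH-HMAC-CHAP re-keying in both directions: the target's accepted keys are rotated with nvmf_subsystem_set_keys, the kernel initiator is handed the new DHHC-1 secret through the nvme-fabrics sysfs node, and the SPDK host bdev is re-keyed in place with bdev_nvme_set_keys. Below is a minimal sketch of that flow using only the RPCs, NQNs and key names that appear in the trace; paths and structure are illustrative, not the harness's own auth.sh helpers.

# Sketch: rotate DH-HMAC-CHAP keys on a live controller without reconnecting.
# Assumes a target listening on 10.0.0.3:4420 and a host RPC socket at
# /var/tmp/host.sock, as in the trace above.
TGT_NQN=nqn.2024-03.io.spdk:cnode0
HOST_NQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2

# 1) Target side: accept a new host/controller key pair for this host.
scripts/rpc.py nvmf_subsystem_set_keys "$TGT_NQN" "$HOST_NQN" \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# 2) Host side: re-authenticate the attached controller with the matching keys.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# 3) Confirm the controller is still attached after the re-key.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'

The bdev_nvme_set_keys call wrapped in NOT at the end of the trace deliberately offers a key pair the target no longer accepts, so the Permission denied (Code=-13) error that follows is the expected outcome.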
01:01:33.945 2024/12/09 06:00:28 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key3 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 01:01:33.945 request: 01:01:33.945 { 01:01:33.945 "method": "bdev_nvme_set_keys", 01:01:33.945 "params": { 01:01:33.945 "name": "nvme0", 01:01:33.945 "dhchap_key": "key1", 01:01:33.945 "dhchap_ctrlr_key": "key3" 01:01:33.945 } 01:01:33.945 } 01:01:33.945 Got JSON-RPC error response 01:01:33.945 GoRPCClient: error on JSON-RPC call 01:01:33.945 06:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:01:33.945 06:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:33.945 06:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:01:33.945 06:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:33.945 06:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 01:01:33.945 06:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:01:33.945 06:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 01:01:34.203 06:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 01:01:34.203 06:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 01:01:35.139 06:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 01:01:35.139 06:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 01:01:35.139 06:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:01:35.397 06:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 01:01:35.397 06:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key0 --dhchap-ctrlr-key key1 01:01:35.397 06:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:35.397 06:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:35.397 06:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:35.397 06:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:01:35.397 06:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:01:35.397 06:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:01:36.329 nvme0n1 01:01:36.329 06:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --dhchap-key key2 --dhchap-ctrlr-key key3 01:01:36.329 06:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:36.329 06:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:36.329 06:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:36.329 06:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 01:01:36.329 06:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 01:01:36.329 06:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 01:01:36.329 06:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 01:01:36.329 06:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:36.329 06:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 01:01:36.329 06:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:01:36.329 06:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 01:01:36.329 06:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 01:01:36.894 2024/12/09 06:00:31 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key0 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 01:01:36.894 request: 01:01:36.894 { 01:01:36.894 "method": "bdev_nvme_set_keys", 01:01:36.894 "params": { 01:01:36.894 "name": "nvme0", 01:01:36.894 "dhchap_key": "key2", 01:01:36.894 "dhchap_ctrlr_key": "key0" 01:01:36.894 } 01:01:36.894 } 01:01:36.894 Got JSON-RPC error response 01:01:36.894 GoRPCClient: error on JSON-RPC call 01:01:36.894 06:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 01:01:36.894 06:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:01:36.894 06:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:01:36.894 06:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:01:36.894 06:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 01:01:36.894 06:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 01:01:36.894 06:00:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:01:37.459 06:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 01:01:37.459 06:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 01:01:38.403 06:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 01:01:38.403 06:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 01:01:38.403 06:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 01:01:38.660 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 01:01:38.660 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 01:01:38.660 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 01:01:38.660 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 76007 01:01:38.660 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 76007 ']' 01:01:38.660 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 76007 01:01:38.660 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 01:01:38.660 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:01:38.660 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76007 01:01:38.660 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:01:38.660 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:01:38.660 killing process with pid 76007 01:01:38.660 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76007' 01:01:38.660 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 76007 01:01:38.660 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 76007 01:01:38.917 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 01:01:38.917 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 01:01:38.917 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 01:01:38.917 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:01:38.917 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 01:01:38.917 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 01:01:38.917 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:01:38.917 rmmod nvme_tcp 01:01:38.917 rmmod nvme_fabrics 01:01:38.917 rmmod nvme_keyring 01:01:38.917 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:01:38.917 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- 
# set -e 01:01:38.917 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 01:01:38.917 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 80818 ']' 01:01:38.917 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 80818 01:01:38.918 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 80818 ']' 01:01:38.918 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 80818 01:01:38.918 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 01:01:38.918 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:01:38.918 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80818 01:01:38.918 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:01:38.918 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:01:38.918 killing process with pid 80818 01:01:38.918 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80818' 01:01:38.918 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 80818 01:01:38.918 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 80818 01:01:39.176 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:01:39.176 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:01:39.176 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:01:39.176 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 01:01:39.176 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 01:01:39.176 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:01:39.176 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 01:01:39.176 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:01:39.176 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:01:39.176 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:01:39.176 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:01:39.176 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:01:39.176 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:01:39.176 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:01:39.176 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:01:39.176 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:01:39.176 06:00:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:01:39.176 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:01:39.176 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:01:39.176 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:01:39.176 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:01:39.176 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:01:39.436 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 01:01:39.436 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:01:39.436 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:01:39.436 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:01:39.436 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 01:01:39.436 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.aHB /tmp/spdk.key-sha256.nUT /tmp/spdk.key-sha384.ib1 /tmp/spdk.key-sha512.8gG /tmp/spdk.key-sha512.7fi /tmp/spdk.key-sha384.se5 /tmp/spdk.key-sha256.i3i '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 01:01:39.436 01:01:39.436 real 3m2.954s 01:01:39.436 user 7m25.813s 01:01:39.436 sys 0m22.096s 01:01:39.436 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:39.436 ************************************ 01:01:39.436 END TEST nvmf_auth_target 01:01:39.436 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 01:01:39.436 ************************************ 01:01:39.436 06:00:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 01:01:39.436 06:00:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 01:01:39.436 06:00:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:01:39.436 06:00:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:39.436 06:00:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:01:39.436 ************************************ 01:01:39.436 START TEST nvmf_bdevio_no_huge 01:01:39.436 ************************************ 01:01:39.436 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 01:01:39.436 * Looking for test storage... 
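Between the auth test and the bdevio test the harness tears the whole fixture down: both SPDK processes are killed, the kernel NVMe/TCP modules are unloaded, the SPDK-tagged iptables rules are stripped, the veth/bridge topology and the target network namespace are dismantled, and the generated DHHC-1 key files are removed. A condensed sketch of that teardown follows, reusing the interface and namespace names from the log; the final namespace removal is an assumption, since the body of _remove_spdk_ns is not shown here.

# Sketch of the per-test teardown performed above (not the literal
# nvmf/common.sh implementation).
kill "$nvmfpid" && wait "$nvmfpid"        # nvmf_tgt pid, e.g. 80818 above
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything except SPDK rules
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk          # assumed; handled by _remove_spdk_ns in the harness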
01:01:39.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:01:39.436 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:01:39.436 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 01:01:39.436 06:00:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:01:39.436 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:01:39.436 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:01:39.436 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 01:01:39.436 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 01:01:39.436 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 01:01:39.436 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 01:01:39.436 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 01:01:39.436 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 01:01:39.436 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 01:01:39.436 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 01:01:39.436 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 01:01:39.436 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:01:39.436 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 01:01:39.436 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 01:01:39.436 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 01:01:39.436 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:01:39.436 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:01:39.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:39.696 --rc genhtml_branch_coverage=1 01:01:39.696 --rc genhtml_function_coverage=1 01:01:39.696 --rc genhtml_legend=1 01:01:39.696 --rc geninfo_all_blocks=1 01:01:39.696 --rc geninfo_unexecuted_blocks=1 01:01:39.696 01:01:39.696 ' 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:01:39.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:39.696 --rc genhtml_branch_coverage=1 01:01:39.696 --rc genhtml_function_coverage=1 01:01:39.696 --rc genhtml_legend=1 01:01:39.696 --rc geninfo_all_blocks=1 01:01:39.696 --rc geninfo_unexecuted_blocks=1 01:01:39.696 01:01:39.696 ' 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:01:39.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:39.696 --rc genhtml_branch_coverage=1 01:01:39.696 --rc genhtml_function_coverage=1 01:01:39.696 --rc genhtml_legend=1 01:01:39.696 --rc geninfo_all_blocks=1 01:01:39.696 --rc geninfo_unexecuted_blocks=1 01:01:39.696 01:01:39.696 ' 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:01:39.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:39.696 --rc genhtml_branch_coverage=1 01:01:39.696 --rc genhtml_function_coverage=1 01:01:39.696 --rc genhtml_legend=1 01:01:39.696 --rc geninfo_all_blocks=1 01:01:39.696 --rc geninfo_unexecuted_blocks=1 01:01:39.696 01:01:39.696 ' 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:01:39.696 
06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:01:39.696 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:01:39.697 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:01:39.697 
06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:01:39.697 Cannot find device "nvmf_init_br" 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:01:39.697 Cannot find device "nvmf_init_br2" 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:01:39.697 Cannot find device "nvmf_tgt_br" 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:01:39.697 Cannot find device "nvmf_tgt_br2" 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:01:39.697 Cannot find device "nvmf_init_br" 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:01:39.697 Cannot find device "nvmf_init_br2" 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:01:39.697 Cannot find device "nvmf_tgt_br" 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:01:39.697 Cannot find device "nvmf_tgt_br2" 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:01:39.697 Cannot find device "nvmf_br" 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:01:39.697 Cannot find device "nvmf_init_if" 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:01:39.697 Cannot find device "nvmf_init_if2" 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 01:01:39.697 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:01:39.697 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:01:39.697 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:01:39.957 06:00:34 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:01:39.957 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:01:39.957 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 01:01:39.957 01:01:39.957 --- 10.0.0.3 ping statistics --- 01:01:39.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:39.957 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:01:39.957 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:01:39.957 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 01:01:39.957 01:01:39.957 --- 10.0.0.4 ping statistics --- 01:01:39.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:39.957 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:01:39.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:01:39.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 01:01:39.957 01:01:39.957 --- 10.0.0.1 ping statistics --- 01:01:39.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:39.957 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:01:39.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:01:39.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 01:01:39.957 01:01:39.957 --- 10.0.0.2 ping statistics --- 01:01:39.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:39.957 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=81661 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 81661 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 81661 ']' 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:01:39.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 01:01:39.957 06:00:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:01:39.957 [2024-12-09 06:00:34.527091] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:01:39.957 [2024-12-09 06:00:34.527192] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 01:01:40.216 [2024-12-09 06:00:34.690517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:01:40.216 [2024-12-09 06:00:34.763847] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:01:40.216 [2024-12-09 06:00:34.763899] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:01:40.216 [2024-12-09 06:00:34.763924] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:01:40.216 [2024-12-09 06:00:34.763934] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:01:40.216 [2024-12-09 06:00:34.763943] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:01:40.216 [2024-12-09 06:00:34.764533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:01:40.216 [2024-12-09 06:00:34.764710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 01:01:40.216 [2024-12-09 06:00:34.764835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 01:01:40.217 [2024-12-09 06:00:34.764836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:01:41.152 [2024-12-09 06:00:35.468721] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:01:41.152 Malloc0 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:01:41.152 [2024-12-09 06:00:35.506119] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:01:41.152 { 01:01:41.152 "params": { 01:01:41.152 "name": "Nvme$subsystem", 01:01:41.152 "trtype": "$TEST_TRANSPORT", 01:01:41.152 "traddr": "$NVMF_FIRST_TARGET_IP", 01:01:41.152 "adrfam": "ipv4", 01:01:41.152 "trsvcid": "$NVMF_PORT", 01:01:41.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:01:41.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:01:41.152 "hdgst": ${hdgst:-false}, 01:01:41.152 "ddgst": ${ddgst:-false} 01:01:41.152 }, 01:01:41.152 "method": "bdev_nvme_attach_controller" 01:01:41.152 } 01:01:41.152 EOF 01:01:41.152 )") 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 01:01:41.152 06:00:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:01:41.152 "params": { 01:01:41.152 "name": "Nvme1", 01:01:41.152 "trtype": "tcp", 01:01:41.152 "traddr": "10.0.0.3", 01:01:41.152 "adrfam": "ipv4", 01:01:41.152 "trsvcid": "4420", 01:01:41.152 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:01:41.152 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:01:41.152 "hdgst": false, 01:01:41.152 "ddgst": false 01:01:41.152 }, 01:01:41.152 "method": "bdev_nvme_attach_controller" 01:01:41.152 }' 01:01:41.152 [2024-12-09 06:00:35.571377] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:01:41.152 [2024-12-09 06:00:35.571468] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid81715 ] 01:01:41.152 [2024-12-09 06:00:35.732552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:01:41.411 [2024-12-09 06:00:35.789676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:01:41.411 [2024-12-09 06:00:35.789787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:01:41.411 [2024-12-09 06:00:35.789793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:01:41.411 I/O targets: 01:01:41.411 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 01:01:41.411 01:01:41.411 01:01:41.411 CUnit - A unit testing framework for C - Version 2.1-3 01:01:41.411 http://cunit.sourceforge.net/ 01:01:41.411 01:01:41.411 01:01:41.411 Suite: bdevio tests on: Nvme1n1 01:01:41.669 Test: blockdev write read block ...passed 01:01:41.669 Test: blockdev write zeroes read block ...passed 01:01:41.669 Test: blockdev write zeroes read no split ...passed 01:01:41.669 Test: blockdev write zeroes read split ...passed 01:01:41.669 Test: blockdev write zeroes read split partial ...passed 01:01:41.669 Test: blockdev reset ...[2024-12-09 06:00:36.116221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 01:01:41.669 [2024-12-09 06:00:36.116334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5beb0 (9): Bad file descriptor 01:01:41.669 [2024-12-09 06:00:36.129509] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
01:01:41.669 passed 01:01:41.669 Test: blockdev write read 8 blocks ...passed 01:01:41.669 Test: blockdev write read size > 128k ...passed 01:01:41.669 Test: blockdev write read invalid size ...passed 01:01:41.669 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:01:41.669 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:01:41.669 Test: blockdev write read max offset ...passed 01:01:41.928 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:01:41.928 Test: blockdev writev readv 8 blocks ...passed 01:01:41.928 Test: blockdev writev readv 30 x 1block ...passed 01:01:41.928 Test: blockdev writev readv block ...passed 01:01:41.929 Test: blockdev writev readv size > 128k ...passed 01:01:41.929 Test: blockdev writev readv size > 128k in two iovs ...passed 01:01:41.929 Test: blockdev comparev and writev ...[2024-12-09 06:00:36.306441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:01:41.929 [2024-12-09 06:00:36.306507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:01:41.929 [2024-12-09 06:00:36.306529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:01:41.929 [2024-12-09 06:00:36.306540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:01:41.929 [2024-12-09 06:00:36.307114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:01:41.929 [2024-12-09 06:00:36.307148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:01:41.929 [2024-12-09 06:00:36.307167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:01:41.929 [2024-12-09 06:00:36.307177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:01:41.929 [2024-12-09 06:00:36.307606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:01:41.929 [2024-12-09 06:00:36.307638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:01:41.929 [2024-12-09 06:00:36.307667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:01:41.929 [2024-12-09 06:00:36.307696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:01:41.929 [2024-12-09 06:00:36.308086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:01:41.929 [2024-12-09 06:00:36.308118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:01:41.929 [2024-12-09 06:00:36.308136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:01:41.929 [2024-12-09 06:00:36.308146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:01:41.929 passed 01:01:41.929 Test: blockdev nvme passthru rw ...passed 01:01:41.929 Test: blockdev nvme passthru vendor specific ...[2024-12-09 06:00:36.391133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:01:41.929 [2024-12-09 06:00:36.391184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:01:41.929 [2024-12-09 06:00:36.391482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:01:41.929 [2024-12-09 06:00:36.391509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:01:41.929 [2024-12-09 06:00:36.391792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:01:41.929 [2024-12-09 06:00:36.391825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:01:41.929 [2024-12-09 06:00:36.392173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:01:41.929 [2024-12-09 06:00:36.392206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:01:41.929 passed 01:01:41.929 Test: blockdev nvme admin passthru ...passed 01:01:41.929 Test: blockdev copy ...passed 01:01:41.929 01:01:41.929 Run Summary: Type Total Ran Passed Failed Inactive 01:01:41.929 suites 1 1 n/a 0 0 01:01:41.929 tests 23 23 23 0 0 01:01:41.929 asserts 152 152 152 0 n/a 01:01:41.929 01:01:41.929 Elapsed time = 0.915 seconds 01:01:42.497 06:00:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:01:42.497 06:00:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 01:01:42.497 06:00:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:01:42.497 06:00:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:01:42.497 06:00:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 01:01:42.497 06:00:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 01:01:42.497 06:00:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 01:01:42.497 06:00:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 01:01:42.497 06:00:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:01:42.497 06:00:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 01:01:42.497 06:00:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 01:01:42.498 06:00:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:01:42.498 rmmod nvme_tcp 01:01:42.498 rmmod nvme_fabrics 01:01:42.498 rmmod nvme_keyring 01:01:42.498 06:00:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:01:42.498 06:00:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 01:01:42.498 06:00:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 01:01:42.498 06:00:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 81661 ']' 01:01:42.498 06:00:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 81661 01:01:42.498 06:00:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 81661 ']' 01:01:42.498 06:00:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 81661 01:01:42.498 06:00:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 01:01:42.498 06:00:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:01:42.498 06:00:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81661 01:01:42.498 06:00:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 01:01:42.498 06:00:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 01:01:42.498 killing process with pid 81661 01:01:42.498 06:00:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81661' 01:01:42.498 06:00:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 81661 01:01:42.498 06:00:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 81661 01:01:42.757 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:01:42.757 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:01:42.757 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:01:42.757 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 01:01:42.757 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 01:01:42.757 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:01:42.757 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 01:01:42.757 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:01:42.757 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:01:42.757 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:01:42.757 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:01:42.757 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:01:42.757 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:01:43.016 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:01:43.016 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:01:43.016 06:00:37 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:01:43.016 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:01:43.016 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:01:43.016 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:01:43.016 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:01:43.016 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:01:43.016 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:01:43.016 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 01:01:43.016 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:01:43.016 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:01:43.016 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:01:43.016 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 01:01:43.016 01:01:43.016 real 0m3.644s 01:01:43.016 user 0m12.166s 01:01:43.016 sys 0m1.334s 01:01:43.016 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 01:01:43.016 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 01:01:43.016 ************************************ 01:01:43.016 END TEST nvmf_bdevio_no_huge 01:01:43.016 ************************************ 01:01:43.017 06:00:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 01:01:43.017 06:00:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:01:43.017 06:00:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:01:43.017 06:00:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:01:43.017 ************************************ 01:01:43.017 START TEST nvmf_tls 01:01:43.017 ************************************ 01:01:43.017 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 01:01:43.277 * Looking for test storage... 
01:01:43.277 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:01:43.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:43.277 --rc genhtml_branch_coverage=1 01:01:43.277 --rc genhtml_function_coverage=1 01:01:43.277 --rc genhtml_legend=1 01:01:43.277 --rc geninfo_all_blocks=1 01:01:43.277 --rc geninfo_unexecuted_blocks=1 01:01:43.277 01:01:43.277 ' 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:01:43.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:43.277 --rc genhtml_branch_coverage=1 01:01:43.277 --rc genhtml_function_coverage=1 01:01:43.277 --rc genhtml_legend=1 01:01:43.277 --rc geninfo_all_blocks=1 01:01:43.277 --rc geninfo_unexecuted_blocks=1 01:01:43.277 01:01:43.277 ' 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:01:43.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:43.277 --rc genhtml_branch_coverage=1 01:01:43.277 --rc genhtml_function_coverage=1 01:01:43.277 --rc genhtml_legend=1 01:01:43.277 --rc geninfo_all_blocks=1 01:01:43.277 --rc geninfo_unexecuted_blocks=1 01:01:43.277 01:01:43.277 ' 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:01:43.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:01:43.277 --rc genhtml_branch_coverage=1 01:01:43.277 --rc genhtml_function_coverage=1 01:01:43.277 --rc genhtml_legend=1 01:01:43.277 --rc geninfo_all_blocks=1 01:01:43.277 --rc geninfo_unexecuted_blocks=1 01:01:43.277 01:01:43.277 ' 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:01:43.277 06:00:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:01:43.277 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:01:43.278 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:01:43.278 
06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:01:43.278 Cannot find device "nvmf_init_br" 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:01:43.278 Cannot find device "nvmf_init_br2" 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:01:43.278 Cannot find device "nvmf_tgt_br" 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:01:43.278 Cannot find device "nvmf_tgt_br2" 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:01:43.278 Cannot find device "nvmf_init_br" 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:01:43.278 Cannot find device "nvmf_init_br2" 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:01:43.278 Cannot find device "nvmf_tgt_br" 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:01:43.278 Cannot find device "nvmf_tgt_br2" 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 01:01:43.278 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:01:43.537 Cannot find device "nvmf_br" 01:01:43.537 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 01:01:43.537 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:01:43.537 Cannot find device "nvmf_init_if" 01:01:43.537 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 01:01:43.537 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:01:43.537 Cannot find device "nvmf_init_if2" 01:01:43.537 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 01:01:43.537 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:01:43.537 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:01:43.537 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 01:01:43.537 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:01:43.537 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:01:43.537 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 01:01:43.537 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:01:43.537 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:01:43.537 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 01:01:43.537 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:01:43.537 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:01:43.538 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:01:43.538 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:01:43.538 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:01:43.538 06:00:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:01:43.538 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:01:43.538 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:01:43.538 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:01:43.538 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:01:43.538 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:01:43.538 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:01:43.538 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:01:43.538 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:01:43.538 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:01:43.538 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:01:43.538 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:01:43.538 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:01:43.538 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:01:43.538 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:01:43.538 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:01:43.538 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:01:43.538 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:01:43.538 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:01:43.538 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:01:43.538 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:01:43.538 06:00:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:01:43.538 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:01:43.538 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:01:43.538 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:01:43.538 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:01:43.538 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 01:01:43.538 01:01:43.538 --- 10.0.0.3 ping statistics --- 01:01:43.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:43.538 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 01:01:43.538 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:01:43.797 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:01:43.797 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 01:01:43.797 01:01:43.797 --- 10.0.0.4 ping statistics --- 01:01:43.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:43.797 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 01:01:43.797 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:01:43.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:01:43.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 01:01:43.797 01:01:43.797 --- 10.0.0.1 ping statistics --- 01:01:43.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:43.797 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 01:01:43.797 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:01:43.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:01:43.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 01:01:43.797 01:01:43.797 --- 10.0.0.2 ping statistics --- 01:01:43.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:43.797 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 01:01:43.797 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:01:43.797 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 01:01:43.797 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:01:43.798 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:01:43.798 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:01:43.798 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:01:43.798 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:01:43.798 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:01:43.798 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:01:43.798 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 01:01:43.798 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:01:43.798 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 01:01:43.798 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:01:43.798 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=81951 01:01:43.798 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 01:01:43.798 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 81951 01:01:43.798 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 81951 ']' 01:01:43.798 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:01:43.798 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:01:43.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:01:43.798 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:01:43.798 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:01:43.798 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:01:43.798 [2024-12-09 06:00:38.222376] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:01:43.798 [2024-12-09 06:00:38.222457] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:01:43.798 [2024-12-09 06:00:38.376243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:01:44.057 [2024-12-09 06:00:38.414931] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:01:44.057 [2024-12-09 06:00:38.414991] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:01:44.057 [2024-12-09 06:00:38.415005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:01:44.057 [2024-12-09 06:00:38.415015] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:01:44.057 [2024-12-09 06:00:38.415024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:01:44.057 [2024-12-09 06:00:38.415384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:01:44.057 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:01:44.057 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:01:44.057 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:01:44.057 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:01:44.057 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:01:44.057 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:01:44.057 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 01:01:44.057 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 01:01:44.315 true 01:01:44.315 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:01:44.315 06:00:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 01:01:44.574 06:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 01:01:44.574 06:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 01:01:44.574 06:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 01:01:44.834 06:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:01:44.834 06:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 01:01:45.092 06:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 01:01:45.092 06:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 01:01:45.092 06:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 01:01:45.351 06:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 01:01:45.351 06:00:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 01:01:45.610 06:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 01:01:45.610 06:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 01:01:45.610 06:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:01:45.610 06:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 01:01:45.868 06:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 01:01:45.868 06:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 01:01:45.868 06:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 01:01:46.127 06:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 01:01:46.127 06:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:01:46.386 06:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 01:01:46.386 06:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 01:01:46.386 06:00:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 01:01:46.645 06:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 01:01:46.645 06:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 01:01:46.904 06:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 01:01:46.904 06:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 01:01:46.904 06:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 01:01:46.904 06:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 01:01:46.904 06:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 01:01:46.904 06:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:01:46.904 06:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 01:01:46.904 06:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 01:01:46.904 06:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 01:01:46.904 06:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 01:01:46.904 06:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 01:01:46.904 06:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 01:01:46.904 06:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 01:01:46.904 06:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:01:46.904 06:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 01:01:46.904 06:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 01:01:46.904 06:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 01:01:47.163 06:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 01:01:47.163 06:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 01:01:47.163 06:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.9KorrCRyuo 01:01:47.163 06:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 01:01:47.163 06:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.pAF3beNQfE 01:01:47.163 06:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 01:01:47.163 06:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 01:01:47.163 06:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.9KorrCRyuo 01:01:47.163 06:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.pAF3beNQfE 01:01:47.163 06:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 01:01:47.163 06:00:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 01:01:47.730 06:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.9KorrCRyuo 01:01:47.730 06:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.9KorrCRyuo 01:01:47.730 06:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:01:47.730 [2024-12-09 06:00:42.286864] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:01:47.731 06:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:01:47.989 06:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 01:01:48.248 [2024-12-09 06:00:42.771161] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:01:48.248 [2024-12-09 06:00:42.771350] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:01:48.248 06:00:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:01:48.509 malloc0 01:01:48.509 06:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:01:48.783 06:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.9KorrCRyuo 01:01:49.067 06:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 01:01:49.334 06:00:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.9KorrCRyuo 01:02:01.552 Initializing NVMe Controllers 01:02:01.552 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:02:01.552 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:02:01.552 Initialization complete. Launching workers. 01:02:01.552 ======================================================== 01:02:01.552 Latency(us) 01:02:01.552 Device Information : IOPS MiB/s Average min max 01:02:01.552 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11818.03 46.16 5416.36 890.24 9401.36 01:02:01.552 ======================================================== 01:02:01.552 Total : 11818.03 46.16 5416.36 890.24 9401.36 01:02:01.552 01:02:01.552 06:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9KorrCRyuo 01:02:01.552 06:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:02:01.552 06:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:02:01.552 06:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:02:01.552 06:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9KorrCRyuo 01:02:01.552 06:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:02:01.552 06:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82304 01:02:01.552 06:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:02:01.552 06:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82304 /var/tmp/bdevperf.sock 01:02:01.552 06:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82304 ']' 01:02:01.553 06:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:02:01.553 06:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:02:01.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:02:01.553 06:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:02:01.553 06:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:02:01.553 06:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:01.553 06:00:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:02:01.553 [2024-12-09 06:00:54.046687] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
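For reference, the keys built by format_interchange_psk at tls.sh@119/@120 above follow the NVMe TLS configured-PSK interchange layout: NVMeTLSkey-1:<hh>:<base64 payload>:, where <hh> is the hash identifier passed as the helper's last argument (01 here; the 48-byte key generated later with digest 2 carries 02) and the payload is the key string with a four-byte checksum appended by the inline Python step (by convention a CRC-32 of the key; that detail is an inference, the log only shows the result). A minimal sketch that makes the layout visible, using nothing but the key string already printed above:

# Decode the payload of the tls.sh@119 key NVMeTLSkey-1:01:...: from the log; the dump
# shows the original 32-character hex string followed by four trailing checksum bytes.
echo 'MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ' | base64 -d | od -A d -t x1z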
01:02:01.553 [2024-12-09 06:00:54.046785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82304 ] 01:02:01.553 [2024-12-09 06:00:54.200130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:01.553 [2024-12-09 06:00:54.240591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:02:01.553 06:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:02:01.553 06:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:02:01.553 06:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9KorrCRyuo 01:02:01.553 06:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 01:02:01.553 [2024-12-09 06:00:54.782499] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:02:01.553 TLSTESTn1 01:02:01.553 06:00:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 01:02:01.553 Running I/O for 10 seconds... 01:02:02.485 4914.00 IOPS, 19.20 MiB/s [2024-12-09T06:00:58.445Z] 4942.00 IOPS, 19.30 MiB/s [2024-12-09T06:00:59.014Z] 4960.00 IOPS, 19.38 MiB/s [2024-12-09T06:01:00.392Z] 4957.25 IOPS, 19.36 MiB/s [2024-12-09T06:01:01.329Z] 4957.40 IOPS, 19.36 MiB/s [2024-12-09T06:01:02.267Z] 4956.83 IOPS, 19.36 MiB/s [2024-12-09T06:01:03.207Z] 4960.43 IOPS, 19.38 MiB/s [2024-12-09T06:01:04.145Z] 4963.88 IOPS, 19.39 MiB/s [2024-12-09T06:01:05.084Z] 4964.67 IOPS, 19.39 MiB/s [2024-12-09T06:01:05.084Z] 4964.70 IOPS, 19.39 MiB/s 01:02:10.498 Latency(us) 01:02:10.498 [2024-12-09T06:01:05.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:02:10.498 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:02:10.498 Verification LBA range: start 0x0 length 0x2000 01:02:10.498 TLSTESTn1 : 10.01 4970.56 19.42 0.00 0.00 25708.54 4230.05 20494.89 01:02:10.498 [2024-12-09T06:01:05.084Z] =================================================================================================================== 01:02:10.498 [2024-12-09T06:01:05.084Z] Total : 4970.56 19.42 0.00 0.00 25708.54 4230.05 20494.89 01:02:10.498 { 01:02:10.498 "results": [ 01:02:10.498 { 01:02:10.498 "job": "TLSTESTn1", 01:02:10.498 "core_mask": "0x4", 01:02:10.498 "workload": "verify", 01:02:10.498 "status": "finished", 01:02:10.498 "verify_range": { 01:02:10.498 "start": 0, 01:02:10.498 "length": 8192 01:02:10.498 }, 01:02:10.498 "queue_depth": 128, 01:02:10.498 "io_size": 4096, 01:02:10.498 "runtime": 10.01377, 01:02:10.498 "iops": 4970.555545014515, 01:02:10.498 "mibps": 19.41623259771295, 01:02:10.498 "io_failed": 0, 01:02:10.498 "io_timeout": 0, 01:02:10.498 "avg_latency_us": 25708.536361809925, 01:02:10.498 "min_latency_us": 4230.050909090909, 01:02:10.498 "max_latency_us": 20494.894545454546 01:02:10.498 } 01:02:10.498 ], 01:02:10.498 "core_count": 1 01:02:10.498 } 01:02:10.498 06:01:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:02:10.498 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 82304 01:02:10.498 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82304 ']' 01:02:10.498 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82304 01:02:10.498 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:02:10.498 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:02:10.498 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82304 01:02:10.498 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:02:10.498 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:02:10.498 killing process with pid 82304 01:02:10.498 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82304' 01:02:10.498 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82304 01:02:10.498 Received shutdown signal, test time was about 10.000000 seconds 01:02:10.498 01:02:10.498 Latency(us) 01:02:10.498 [2024-12-09T06:01:05.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:02:10.498 [2024-12-09T06:01:05.084Z] =================================================================================================================== 01:02:10.498 [2024-12-09T06:01:05.084Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:02:10.498 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82304 01:02:10.756 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pAF3beNQfE 01:02:10.756 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 01:02:10.756 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pAF3beNQfE 01:02:10.756 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 01:02:10.756 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:02:10.756 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 01:02:10.756 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:02:10.756 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pAF3beNQfE 01:02:10.756 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:02:10.756 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:02:10.756 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:02:10.756 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pAF3beNQfE 01:02:10.756 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:02:10.756 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82449 01:02:10.756 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:02:10.756 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:02:10.756 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82449 /var/tmp/bdevperf.sock 01:02:10.756 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82449 ']' 01:02:10.756 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:02:10.756 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:02:10.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:02:10.757 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:02:10.757 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:02:10.757 06:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:10.757 [2024-12-09 06:01:05.261282] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:02:10.757 [2024-12-09 06:01:05.261381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82449 ] 01:02:11.015 [2024-12-09 06:01:05.401766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:11.015 [2024-12-09 06:01:05.431456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:02:11.948 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:02:11.948 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:02:11.948 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pAF3beNQfE 01:02:11.948 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 01:02:12.207 [2024-12-09 06:01:06.707452] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:02:12.207 [2024-12-09 06:01:06.716135] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:02:12.207 [2024-12-09 06:01:06.716268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203b620 (107): Transport endpoint is not connected 01:02:12.207 [2024-12-09 06:01:06.717261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203b620 (9): Bad file descriptor 01:02:12.207 [2024-12-09 
06:01:06.718256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 01:02:12.207 [2024-12-09 06:01:06.718299] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 01:02:12.207 [2024-12-09 06:01:06.718324] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 01:02:12.207 [2024-12-09 06:01:06.718340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 01:02:12.207 2024/12/09 06:01:06 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:02:12.207 request: 01:02:12.207 { 01:02:12.207 "method": "bdev_nvme_attach_controller", 01:02:12.207 "params": { 01:02:12.207 "name": "TLSTEST", 01:02:12.207 "trtype": "tcp", 01:02:12.207 "traddr": "10.0.0.3", 01:02:12.207 "adrfam": "ipv4", 01:02:12.208 "trsvcid": "4420", 01:02:12.208 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:02:12.208 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:02:12.208 "prchk_reftag": false, 01:02:12.208 "prchk_guard": false, 01:02:12.208 "hdgst": false, 01:02:12.208 "ddgst": false, 01:02:12.208 "psk": "key0", 01:02:12.208 "allow_unrecognized_csi": false 01:02:12.208 } 01:02:12.208 } 01:02:12.208 Got JSON-RPC error response 01:02:12.208 GoRPCClient: error on JSON-RPC call 01:02:12.208 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 82449 01:02:12.208 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82449 ']' 01:02:12.208 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82449 01:02:12.208 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:02:12.208 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:02:12.208 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82449 01:02:12.208 killing process with pid 82449 01:02:12.208 Received shutdown signal, test time was about 10.000000 seconds 01:02:12.208 01:02:12.208 Latency(us) 01:02:12.208 [2024-12-09T06:01:06.794Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:02:12.208 [2024-12-09T06:01:06.794Z] =================================================================================================================== 01:02:12.208 [2024-12-09T06:01:06.794Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:02:12.208 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:02:12.208 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:02:12.208 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82449' 01:02:12.208 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82449 01:02:12.208 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@978 -- # wait 82449 01:02:12.466 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 01:02:12.466 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 01:02:12.466 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:02:12.466 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:02:12.466 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:02:12.466 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9KorrCRyuo 01:02:12.466 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 01:02:12.466 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9KorrCRyuo 01:02:12.466 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 01:02:12.466 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:02:12.466 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 01:02:12.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:02:12.466 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:02:12.466 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.9KorrCRyuo 01:02:12.466 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:02:12.466 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:02:12.466 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 01:02:12.466 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9KorrCRyuo 01:02:12.466 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:02:12.466 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82501 01:02:12.466 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:02:12.466 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:02:12.466 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82501 /var/tmp/bdevperf.sock 01:02:12.466 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82501 ']' 01:02:12.466 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:02:12.466 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:02:12.466 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
01:02:12.466 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:02:12.466 06:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:12.466 [2024-12-09 06:01:06.950479] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:02:12.466 [2024-12-09 06:01:06.950823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82501 ] 01:02:12.725 [2024-12-09 06:01:07.096635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:12.725 [2024-12-09 06:01:07.129428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:02:13.291 06:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:02:13.291 06:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:02:13.291 06:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9KorrCRyuo 01:02:13.549 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 01:02:13.807 [2024-12-09 06:01:08.280098] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:02:13.807 [2024-12-09 06:01:08.286126] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 01:02:13.807 [2024-12-09 06:01:08.286178] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 01:02:13.807 [2024-12-09 06:01:08.286237] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:02:13.807 [2024-12-09 06:01:08.286826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6eb620 (107): Transport endpoint is not connected 01:02:13.807 [2024-12-09 06:01:08.287810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6eb620 (9): Bad file descriptor 01:02:13.807 [2024-12-09 06:01:08.288806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 01:02:13.807 [2024-12-09 06:01:08.288839] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 01:02:13.807 [2024-12-09 06:01:08.288849] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 01:02:13.807 [2024-12-09 06:01:08.288866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
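For reference, the tcp_sock_get_key and posix errors above are a PSK-identity lookup failure rather than a transport problem: during the handshake the initiator offers an identity of the form "NVMe0R01 <hostnqn> <subnqn>" (the exact string appears in the error text), and the target resolves it against the hosts registered with nvmf_subsystem_add_host. Only host1 was authorized for cnode1, so an identity built from host2 cannot be matched and the connection is torn down before the controller initializes. A one-line sketch of the identity this case presents:

# PSK identity offered by the failing tls.sh@150 attach (host2 is not registered).
printf 'NVMe0R01 %s %s\n' nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1

The tls.sh@153 case further down fails the same lookup from the other side, with an unknown subsystem NQN (cnode2) in the identity.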
01:02:13.807 2024/12/09 06:01:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:02:13.807 request: 01:02:13.807 { 01:02:13.807 "method": "bdev_nvme_attach_controller", 01:02:13.807 "params": { 01:02:13.807 "name": "TLSTEST", 01:02:13.807 "trtype": "tcp", 01:02:13.807 "traddr": "10.0.0.3", 01:02:13.807 "adrfam": "ipv4", 01:02:13.807 "trsvcid": "4420", 01:02:13.807 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:02:13.807 "hostnqn": "nqn.2016-06.io.spdk:host2", 01:02:13.807 "prchk_reftag": false, 01:02:13.807 "prchk_guard": false, 01:02:13.807 "hdgst": false, 01:02:13.807 "ddgst": false, 01:02:13.807 "psk": "key0", 01:02:13.807 "allow_unrecognized_csi": false 01:02:13.807 } 01:02:13.807 } 01:02:13.807 Got JSON-RPC error response 01:02:13.807 GoRPCClient: error on JSON-RPC call 01:02:13.807 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 82501 01:02:13.807 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82501 ']' 01:02:13.807 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82501 01:02:13.807 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:02:13.807 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:02:13.807 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82501 01:02:13.807 killing process with pid 82501 01:02:13.807 Received shutdown signal, test time was about 10.000000 seconds 01:02:13.807 01:02:13.807 Latency(us) 01:02:13.807 [2024-12-09T06:01:08.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:02:13.807 [2024-12-09T06:01:08.393Z] =================================================================================================================== 01:02:13.807 [2024-12-09T06:01:08.393Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:02:13.807 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:02:13.807 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:02:13.807 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82501' 01:02:13.807 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82501 01:02:13.807 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82501 01:02:14.065 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 01:02:14.065 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 01:02:14.065 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:02:14.065 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:02:14.065 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:02:14.065 06:01:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9KorrCRyuo 01:02:14.065 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 01:02:14.065 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9KorrCRyuo 01:02:14.065 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 01:02:14.065 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:02:14.065 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 01:02:14.065 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:02:14.065 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.9KorrCRyuo 01:02:14.065 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:02:14.065 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 01:02:14.065 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:02:14.066 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9KorrCRyuo 01:02:14.066 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:02:14.066 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82554 01:02:14.066 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:02:14.066 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:02:14.066 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82554 /var/tmp/bdevperf.sock 01:02:14.066 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82554 ']' 01:02:14.066 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:02:14.066 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:02:14.066 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:02:14.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:02:14.066 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:02:14.066 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:14.066 [2024-12-09 06:01:08.525833] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:02:14.066 [2024-12-09 06:01:08.526114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82554 ] 01:02:14.324 [2024-12-09 06:01:08.671688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:14.324 [2024-12-09 06:01:08.700422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:02:14.324 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:02:14.324 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:02:14.324 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9KorrCRyuo 01:02:14.583 06:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 01:02:14.841 [2024-12-09 06:01:09.259177] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:02:14.841 [2024-12-09 06:01:09.266149] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 01:02:14.841 [2024-12-09 06:01:09.266200] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 01:02:14.841 [2024-12-09 06:01:09.266259] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:02:14.841 [2024-12-09 06:01:09.266937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9d620 (107): Transport endpoint is not connected 01:02:14.841 [2024-12-09 06:01:09.267895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9d620 (9): Bad file descriptor 01:02:14.841 [2024-12-09 06:01:09.268891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 01:02:14.841 [2024-12-09 06:01:09.268931] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 01:02:14.841 [2024-12-09 06:01:09.268957] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 01:02:14.841 [2024-12-09 06:01:09.268972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
01:02:14.842 2024/12/09 06:01:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:02:14.842 request: 01:02:14.842 { 01:02:14.842 "method": "bdev_nvme_attach_controller", 01:02:14.842 "params": { 01:02:14.842 "name": "TLSTEST", 01:02:14.842 "trtype": "tcp", 01:02:14.842 "traddr": "10.0.0.3", 01:02:14.842 "adrfam": "ipv4", 01:02:14.842 "trsvcid": "4420", 01:02:14.842 "subnqn": "nqn.2016-06.io.spdk:cnode2", 01:02:14.842 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:02:14.842 "prchk_reftag": false, 01:02:14.842 "prchk_guard": false, 01:02:14.842 "hdgst": false, 01:02:14.842 "ddgst": false, 01:02:14.842 "psk": "key0", 01:02:14.842 "allow_unrecognized_csi": false 01:02:14.842 } 01:02:14.842 } 01:02:14.842 Got JSON-RPC error response 01:02:14.842 GoRPCClient: error on JSON-RPC call 01:02:14.842 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 82554 01:02:14.842 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82554 ']' 01:02:14.842 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82554 01:02:14.842 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:02:14.842 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:02:14.842 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82554 01:02:14.842 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:02:14.842 killing process with pid 82554 01:02:14.842 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:02:14.842 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82554' 01:02:14.842 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82554 01:02:14.842 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82554 01:02:14.842 Received shutdown signal, test time was about 10.000000 seconds 01:02:14.842 01:02:14.842 Latency(us) 01:02:14.842 [2024-12-09T06:01:09.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:02:14.842 [2024-12-09T06:01:09.428Z] =================================================================================================================== 01:02:14.842 [2024-12-09T06:01:09.428Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:02:15.100 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 01:02:15.100 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 01:02:15.100 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:02:15.100 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:02:15.100 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:02:15.100 06:01:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 01:02:15.100 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 01:02:15.100 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 01:02:15.100 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 01:02:15.100 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:02:15.100 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 01:02:15.100 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:02:15.100 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 01:02:15.100 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:02:15.100 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:02:15.100 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:02:15.100 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 01:02:15.100 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:02:15.100 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:02:15.100 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82593 01:02:15.100 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:02:15.100 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82593 /var/tmp/bdevperf.sock 01:02:15.100 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82593 ']' 01:02:15.100 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:02:15.100 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:02:15.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:02:15.100 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:02:15.100 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:02:15.100 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:15.100 [2024-12-09 06:01:09.489238] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
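The bdevperf instance starting here belongs to the tls.sh@156 case, which deliberately passes an empty string as the PSK path. The failure it expects shows up just below: keyring_file_add_key refuses the non-absolute path, so key0 is never created and the subsequent attach fails with "Required key not available". A minimal sketch of that check, assuming the same rpc.py and bdevperf socket as this run:

# Empty (non-absolute) key paths must be rejected; the NOT-wrapped test relies on this.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
if "$rpc" -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''; then
    echo "unexpected: an empty key path was accepted" >&2
else
    echo "rejected as expected: keyring file paths must be absolute" >&2
fi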
01:02:15.100 [2024-12-09 06:01:09.489339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82593 ] 01:02:15.100 [2024-12-09 06:01:09.621563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:15.100 [2024-12-09 06:01:09.650582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:02:15.358 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:02:15.359 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:02:15.359 06:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 01:02:15.617 [2024-12-09 06:01:09.982156] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 01:02:15.617 [2024-12-09 06:01:09.982208] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 01:02:15.617 2024/12/09 06:01:09 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 01:02:15.617 request: 01:02:15.617 { 01:02:15.617 "method": "keyring_file_add_key", 01:02:15.617 "params": { 01:02:15.617 "name": "key0", 01:02:15.617 "path": "" 01:02:15.617 } 01:02:15.617 } 01:02:15.617 Got JSON-RPC error response 01:02:15.617 GoRPCClient: error on JSON-RPC call 01:02:15.617 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 01:02:15.875 [2024-12-09 06:01:10.270560] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:02:15.875 [2024-12-09 06:01:10.270642] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 01:02:15.875 2024/12/09 06:01:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 01:02:15.875 request: 01:02:15.875 { 01:02:15.875 "method": "bdev_nvme_attach_controller", 01:02:15.875 "params": { 01:02:15.875 "name": "TLSTEST", 01:02:15.875 "trtype": "tcp", 01:02:15.875 "traddr": "10.0.0.3", 01:02:15.875 "adrfam": "ipv4", 01:02:15.875 "trsvcid": "4420", 01:02:15.875 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:02:15.875 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:02:15.875 "prchk_reftag": false, 01:02:15.875 "prchk_guard": false, 01:02:15.875 "hdgst": false, 01:02:15.875 "ddgst": false, 01:02:15.875 "psk": "key0", 01:02:15.875 "allow_unrecognized_csi": false 01:02:15.875 } 01:02:15.875 } 01:02:15.875 Got JSON-RPC error response 01:02:15.875 GoRPCClient: error on JSON-RPC call 01:02:15.875 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 82593 01:02:15.875 06:01:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82593 ']' 01:02:15.875 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82593 01:02:15.875 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:02:15.875 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:02:15.875 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82593 01:02:15.875 killing process with pid 82593 01:02:15.875 Received shutdown signal, test time was about 10.000000 seconds 01:02:15.875 01:02:15.875 Latency(us) 01:02:15.875 [2024-12-09T06:01:10.461Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:02:15.875 [2024-12-09T06:01:10.461Z] =================================================================================================================== 01:02:15.875 [2024-12-09T06:01:10.461Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:02:15.875 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:02:15.875 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:02:15.875 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82593' 01:02:15.875 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82593 01:02:15.875 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82593 01:02:15.875 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 01:02:15.875 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 01:02:15.875 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:02:15.875 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:02:15.875 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:02:15.875 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 81951 01:02:15.875 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 81951 ']' 01:02:15.875 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 81951 01:02:15.875 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:02:15.875 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:02:15.875 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81951 01:02:16.134 killing process with pid 81951 01:02:16.134 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:02:16.134 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:02:16.134 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81951' 01:02:16.134 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 81951 01:02:16.134 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 81951 01:02:16.134 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 01:02:16.134 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 01:02:16.134 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 01:02:16.134 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:02:16.135 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 01:02:16.135 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 01:02:16.135 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 01:02:16.135 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 01:02:16.135 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 01:02:16.135 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.2oyLjadhV8 01:02:16.135 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 01:02:16.135 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.2oyLjadhV8 01:02:16.135 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 01:02:16.135 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:02:16.135 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 01:02:16.135 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:16.135 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=82642 01:02:16.135 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 82642 01:02:16.135 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82642 ']' 01:02:16.135 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:02:16.135 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:02:16.135 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:02:16.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:02:16.135 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:02:16.135 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:02:16.135 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:16.395 [2024-12-09 06:01:10.734484] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:02:16.395 [2024-12-09 06:01:10.734592] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:02:16.395 [2024-12-09 06:01:10.879062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:16.395 [2024-12-09 06:01:10.905997] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:02:16.395 [2024-12-09 06:01:10.906050] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:02:16.395 [2024-12-09 06:01:10.906075] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:02:16.395 [2024-12-09 06:01:10.906082] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:02:16.395 [2024-12-09 06:01:10.906088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:02:16.395 [2024-12-09 06:01:10.906389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:02:16.654 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:02:16.654 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:02:16.654 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:02:16.654 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:02:16.654 06:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:16.654 06:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:02:16.654 06:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.2oyLjadhV8 01:02:16.654 06:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.2oyLjadhV8 01:02:16.654 06:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:02:16.913 [2024-12-09 06:01:11.318444] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:02:16.913 06:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:02:17.171 06:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 01:02:17.430 [2024-12-09 06:01:11.826509] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:02:17.430 [2024-12-09 06:01:11.826775] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:02:17.430 06:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:02:17.689 malloc0 01:02:17.689 06:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:02:17.948 06:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
keyring_file_add_key key0 /tmp/tmp.2oyLjadhV8 01:02:17.948 06:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 01:02:18.207 06:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2oyLjadhV8 01:02:18.207 06:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:02:18.207 06:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:02:18.207 06:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:02:18.207 06:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.2oyLjadhV8 01:02:18.207 06:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:02:18.207 06:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82738 01:02:18.207 06:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:02:18.207 06:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:02:18.207 06:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82738 /var/tmp/bdevperf.sock 01:02:18.207 06:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82738 ']' 01:02:18.207 06:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:02:18.207 06:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:02:18.207 06:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:02:18.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:02:18.207 06:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:02:18.207 06:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:18.207 [2024-12-09 06:01:12.784018] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
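For readability, the target-side sequence just completed for key_long (tls.sh@50-59) and the initiator-side attach that follows (tls.sh@33-35) condense to the commands below. This is a recap sketch using the paths and addresses of this particular run, not the test script itself:

# Target: TCP transport, one subsystem with a malloc namespace, a TLS-enabled listener,
# and host1 authorized against the PSK registered as key0.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/tmp/tmp.2oyLjadhV8    # 0600 key file written earlier in the log
"$rpc" nvmf_create_transport -t tcp -o
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
"$rpc" bdev_malloc_create 32 4096 -b malloc0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
"$rpc" keyring_file_add_key key0 "$key"
"$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# Initiator (bdevperf): register the same key on its RPC socket, then attach over TLS.
"$rpc" -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key"
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0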
01:02:18.208 [2024-12-09 06:01:12.784133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82738 ] 01:02:18.466 [2024-12-09 06:01:12.932195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:18.466 [2024-12-09 06:01:12.971037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:02:18.725 06:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:02:18.725 06:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:02:18.725 06:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2oyLjadhV8 01:02:18.725 06:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 01:02:18.984 [2024-12-09 06:01:13.479405] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:02:18.984 TLSTESTn1 01:02:18.984 06:01:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 01:02:19.243 Running I/O for 10 seconds... 01:02:21.116 4882.00 IOPS, 19.07 MiB/s [2024-12-09T06:01:17.077Z] 4927.00 IOPS, 19.25 MiB/s [2024-12-09T06:01:18.011Z] 4937.33 IOPS, 19.29 MiB/s [2024-12-09T06:01:18.943Z] 4937.25 IOPS, 19.29 MiB/s [2024-12-09T06:01:19.875Z] 4934.80 IOPS, 19.28 MiB/s [2024-12-09T06:01:20.807Z] 4938.17 IOPS, 19.29 MiB/s [2024-12-09T06:01:21.741Z] 4935.86 IOPS, 19.28 MiB/s [2024-12-09T06:01:22.677Z] 4942.38 IOPS, 19.31 MiB/s [2024-12-09T06:01:24.054Z] 4942.00 IOPS, 19.30 MiB/s [2024-12-09T06:01:24.054Z] 4941.40 IOPS, 19.30 MiB/s 01:02:29.468 Latency(us) 01:02:29.468 [2024-12-09T06:01:24.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:02:29.468 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:02:29.468 Verification LBA range: start 0x0 length 0x2000 01:02:29.468 TLSTESTn1 : 10.01 4947.27 19.33 0.00 0.00 25828.10 4736.47 22163.08 01:02:29.468 [2024-12-09T06:01:24.054Z] =================================================================================================================== 01:02:29.468 [2024-12-09T06:01:24.054Z] Total : 4947.27 19.33 0.00 0.00 25828.10 4736.47 22163.08 01:02:29.468 { 01:02:29.468 "results": [ 01:02:29.468 { 01:02:29.468 "job": "TLSTESTn1", 01:02:29.468 "core_mask": "0x4", 01:02:29.468 "workload": "verify", 01:02:29.468 "status": "finished", 01:02:29.468 "verify_range": { 01:02:29.468 "start": 0, 01:02:29.468 "length": 8192 01:02:29.468 }, 01:02:29.468 "queue_depth": 128, 01:02:29.468 "io_size": 4096, 01:02:29.468 "runtime": 10.014002, 01:02:29.468 "iops": 4947.272828585415, 01:02:29.468 "mibps": 19.325284486661776, 01:02:29.468 "io_failed": 0, 01:02:29.468 "io_timeout": 0, 01:02:29.468 "avg_latency_us": 25828.104482881376, 01:02:29.468 "min_latency_us": 4736.465454545454, 01:02:29.468 "max_latency_us": 22163.083636363637 01:02:29.468 } 01:02:29.468 ], 01:02:29.468 "core_count": 1 01:02:29.468 } 01:02:29.468 06:01:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:02:29.468 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 82738 01:02:29.468 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82738 ']' 01:02:29.468 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82738 01:02:29.468 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:02:29.468 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:02:29.468 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82738 01:02:29.468 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:02:29.468 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:02:29.468 killing process with pid 82738 01:02:29.468 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82738' 01:02:29.468 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82738 01:02:29.468 Received shutdown signal, test time was about 10.000000 seconds 01:02:29.468 01:02:29.468 Latency(us) 01:02:29.468 [2024-12-09T06:01:24.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:02:29.468 [2024-12-09T06:01:24.054Z] =================================================================================================================== 01:02:29.468 [2024-12-09T06:01:24.054Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:02:29.468 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82738 01:02:29.468 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.2oyLjadhV8 01:02:29.468 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2oyLjadhV8 01:02:29.468 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 01:02:29.468 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2oyLjadhV8 01:02:29.468 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 01:02:29.468 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:02:29.468 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 01:02:29.468 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:02:29.468 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2oyLjadhV8 01:02:29.468 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 01:02:29.468 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 01:02:29.468 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 01:02:29.468 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # psk=/tmp/tmp.2oyLjadhV8 01:02:29.468 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:02:29.468 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82879 01:02:29.468 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:02:29.468 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:02:29.468 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82879 /var/tmp/bdevperf.sock 01:02:29.468 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82879 ']' 01:02:29.469 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:02:29.469 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:02:29.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:02:29.469 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:02:29.469 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:02:29.469 06:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:29.469 [2024-12-09 06:01:23.910741] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:02:29.469 [2024-12-09 06:01:23.910830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82879 ] 01:02:29.740 [2024-12-09 06:01:24.059676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:29.740 [2024-12-09 06:01:24.091949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:02:29.740 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:02:29.740 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:02:29.740 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2oyLjadhV8 01:02:30.024 [2024-12-09 06:01:24.376910] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.2oyLjadhV8': 0100666 01:02:30.024 [2024-12-09 06:01:24.376947] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 01:02:30.024 2024/12/09 06:01:24 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.2oyLjadhV8], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 01:02:30.024 request: 01:02:30.024 { 01:02:30.024 "method": "keyring_file_add_key", 01:02:30.024 "params": { 01:02:30.024 "name": "key0", 01:02:30.024 "path": "/tmp/tmp.2oyLjadhV8" 01:02:30.024 } 01:02:30.024 } 01:02:30.024 Got JSON-RPC error response 01:02:30.024 GoRPCClient: error on JSON-RPC call 01:02:30.024 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 01:02:30.297 [2024-12-09 06:01:24.677124] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:02:30.297 [2024-12-09 06:01:24.677207] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 01:02:30.297 2024/12/09 06:01:24 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 01:02:30.297 request: 01:02:30.297 { 01:02:30.297 "method": "bdev_nvme_attach_controller", 01:02:30.297 "params": { 01:02:30.297 "name": "TLSTEST", 01:02:30.297 "trtype": "tcp", 01:02:30.297 "traddr": "10.0.0.3", 01:02:30.297 "adrfam": "ipv4", 01:02:30.297 "trsvcid": "4420", 01:02:30.297 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:02:30.297 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:02:30.297 "prchk_reftag": false, 01:02:30.297 "prchk_guard": false, 01:02:30.297 "hdgst": false, 01:02:30.297 "ddgst": false, 01:02:30.297 "psk": "key0", 01:02:30.297 "allow_unrecognized_csi": false 01:02:30.297 } 01:02:30.297 } 01:02:30.297 Got JSON-RPC error response 01:02:30.297 GoRPCClient: error on JSON-RPC call 01:02:30.297 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 82879 01:02:30.297 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82879 ']' 01:02:30.297 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82879 01:02:30.297 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:02:30.297 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:02:30.297 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82879 01:02:30.297 killing process with pid 82879 01:02:30.297 Received shutdown signal, test time was about 10.000000 seconds 01:02:30.297 01:02:30.297 Latency(us) 01:02:30.297 [2024-12-09T06:01:24.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:02:30.297 [2024-12-09T06:01:24.883Z] =================================================================================================================== 01:02:30.297 [2024-12-09T06:01:24.883Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:02:30.297 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:02:30.297 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:02:30.297 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82879' 01:02:30.297 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82879 01:02:30.297 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82879 01:02:30.297 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 
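The negative test above turns on the keyring's file-permission check: keyring_file_add_key refuses a PSK file whose mode allows group/other access (the trace shows mode 0100666 being rejected), and the subsequent bdev_nvme_attach_controller then fails with "Could not load PSK: key0". A minimal sketch of the passing path, using only the paths, NQNs and rpc.py flags already present in this trace:

chmod 0600 /tmp/tmp.2oyLjadhV8    # the keyring only accepts owner-only key files
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2oyLjadhV8
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0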
01:02:30.297 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 01:02:30.297 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:02:30.297 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:02:30.297 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:02:30.297 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 82642 01:02:30.297 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82642 ']' 01:02:30.297 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82642 01:02:30.297 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:02:30.297 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:02:30.297 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82642 01:02:30.297 killing process with pid 82642 01:02:30.297 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:02:30.297 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:02:30.297 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82642' 01:02:30.297 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82642 01:02:30.297 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82642 01:02:30.556 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 01:02:30.556 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:02:30.556 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 01:02:30.556 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:30.556 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=82923 01:02:30.556 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:02:30.556 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 82923 01:02:30.556 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82923 ']' 01:02:30.556 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:02:30.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:02:30.556 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:02:30.556 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:02:30.556 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:02:30.556 06:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:30.556 [2024-12-09 06:01:25.062932] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:02:30.556 [2024-12-09 06:01:25.063034] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:02:30.814 [2024-12-09 06:01:25.212600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:30.814 [2024-12-09 06:01:25.241778] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:02:30.814 [2024-12-09 06:01:25.241820] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:02:30.814 [2024-12-09 06:01:25.241846] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:02:30.814 [2024-12-09 06:01:25.241853] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:02:30.814 [2024-12-09 06:01:25.241859] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:02:30.814 [2024-12-09 06:01:25.242178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:02:31.750 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:02:31.750 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:02:31.750 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:02:31.750 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:02:31.750 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:31.750 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:02:31.750 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.2oyLjadhV8 01:02:31.750 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 01:02:31.750 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.2oyLjadhV8 01:02:31.750 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 01:02:31.750 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:02:31.750 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 01:02:31.750 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:02:31.750 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.2oyLjadhV8 01:02:31.750 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.2oyLjadhV8 01:02:31.750 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:02:31.750 [2024-12-09 06:01:26.325363] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:02:32.009 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:02:32.009 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 01:02:32.269 [2024-12-09 06:01:26.841463] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:02:32.269 [2024-12-09 06:01:26.841708] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:02:32.529 06:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:02:32.529 malloc0 01:02:32.529 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:02:32.789 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.2oyLjadhV8 01:02:33.049 [2024-12-09 06:01:27.507408] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.2oyLjadhV8': 0100666 01:02:33.049 [2024-12-09 06:01:27.507459] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 01:02:33.049 2024/12/09 06:01:27 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.2oyLjadhV8], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 01:02:33.049 request: 01:02:33.049 { 01:02:33.049 "method": "keyring_file_add_key", 01:02:33.049 "params": { 01:02:33.049 "name": "key0", 01:02:33.049 "path": "/tmp/tmp.2oyLjadhV8" 01:02:33.049 } 01:02:33.049 } 01:02:33.049 Got JSON-RPC error response 01:02:33.049 GoRPCClient: error on JSON-RPC call 01:02:33.049 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 01:02:33.309 [2024-12-09 06:01:27.795490] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 01:02:33.309 [2024-12-09 06:01:27.795554] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 01:02:33.309 2024/12/09 06:01:27 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:key0], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 01:02:33.309 request: 01:02:33.309 { 01:02:33.309 "method": "nvmf_subsystem_add_host", 01:02:33.309 "params": { 01:02:33.309 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:02:33.309 "host": "nqn.2016-06.io.spdk:host1", 01:02:33.309 "psk": "key0" 01:02:33.309 } 01:02:33.309 } 01:02:33.309 Got JSON-RPC error response 01:02:33.309 GoRPCClient: error on JSON-RPC call 01:02:33.309 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 01:02:33.309 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:02:33.309 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:02:33.309 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:02:33.309 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 82923 01:02:33.309 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82923 ']' 01:02:33.309 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 82923 01:02:33.309 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:02:33.309 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:02:33.309 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82923 01:02:33.309 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:02:33.309 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:02:33.309 killing process with pid 82923 01:02:33.309 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82923' 01:02:33.309 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82923 01:02:33.309 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82923 01:02:33.569 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.2oyLjadhV8 01:02:33.569 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 01:02:33.569 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:02:33.569 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 01:02:33.569 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:33.569 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83049 01:02:33.570 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83049 01:02:33.570 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:02:33.570 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83049 ']' 01:02:33.570 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:02:33.570 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:02:33.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:02:33.570 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:02:33.570 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:02:33.570 06:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:33.570 [2024-12-09 06:01:28.045864] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:02:33.570 [2024-12-09 06:01:28.045978] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:02:33.829 [2024-12-09 06:01:28.191508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:33.829 [2024-12-09 06:01:28.219629] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
01:02:33.829 [2024-12-09 06:01:28.219707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:02:33.829 [2024-12-09 06:01:28.219732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:02:33.829 [2024-12-09 06:01:28.219739] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:02:33.829 [2024-12-09 06:01:28.219745] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:02:33.829 [2024-12-09 06:01:28.220084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:02:34.397 06:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:02:34.397 06:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:02:34.397 06:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:02:34.397 06:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:02:34.397 06:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:34.656 06:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:02:34.656 06:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.2oyLjadhV8 01:02:34.656 06:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.2oyLjadhV8 01:02:34.656 06:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:02:34.915 [2024-12-09 06:01:29.279046] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:02:34.915 06:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:02:35.172 06:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 01:02:35.429 [2024-12-09 06:01:29.811206] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:02:35.429 [2024-12-09 06:01:29.811459] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:02:35.429 06:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:02:35.687 malloc0 01:02:35.687 06:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:02:35.946 06:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.2oyLjadhV8 01:02:36.204 06:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 01:02:36.462 06:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=83160 01:02:36.462 06:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:02:36.462 06:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:02:36.463 06:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 83160 /var/tmp/bdevperf.sock 01:02:36.463 06:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83160 ']' 01:02:36.463 06:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:02:36.463 06:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:02:36.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:02:36.463 06:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:02:36.463 06:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:02:36.463 06:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:36.463 [2024-12-09 06:01:30.919332] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:02:36.463 [2024-12-09 06:01:30.919434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83160 ] 01:02:36.720 [2024-12-09 06:01:31.071841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:36.720 [2024-12-09 06:01:31.110735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:02:36.720 06:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:02:36.720 06:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:02:36.720 06:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2oyLjadhV8 01:02:36.978 06:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 01:02:37.235 [2024-12-09 06:01:31.619250] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:02:37.235 TLSTESTn1 01:02:37.235 06:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 01:02:37.801 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 01:02:37.801 "subsystems": [ 01:02:37.801 { 01:02:37.801 "subsystem": "keyring", 01:02:37.801 "config": [ 01:02:37.801 { 01:02:37.801 "method": "keyring_file_add_key", 01:02:37.801 "params": { 01:02:37.801 "name": "key0", 01:02:37.801 "path": "/tmp/tmp.2oyLjadhV8" 01:02:37.801 } 01:02:37.801 } 01:02:37.801 ] 01:02:37.801 }, 01:02:37.801 { 01:02:37.801 "subsystem": "iobuf", 01:02:37.801 "config": [ 01:02:37.801 { 01:02:37.801 "method": "iobuf_set_options", 01:02:37.801 "params": { 01:02:37.801 "enable_numa": false, 01:02:37.801 "large_bufsize": 135168, 01:02:37.801 "large_pool_count": 1024, 01:02:37.801 
"small_bufsize": 8192, 01:02:37.801 "small_pool_count": 8192 01:02:37.801 } 01:02:37.801 } 01:02:37.801 ] 01:02:37.801 }, 01:02:37.801 { 01:02:37.801 "subsystem": "sock", 01:02:37.801 "config": [ 01:02:37.801 { 01:02:37.801 "method": "sock_set_default_impl", 01:02:37.801 "params": { 01:02:37.801 "impl_name": "posix" 01:02:37.801 } 01:02:37.801 }, 01:02:37.801 { 01:02:37.801 "method": "sock_impl_set_options", 01:02:37.801 "params": { 01:02:37.801 "enable_ktls": false, 01:02:37.801 "enable_placement_id": 0, 01:02:37.801 "enable_quickack": false, 01:02:37.801 "enable_recv_pipe": true, 01:02:37.801 "enable_zerocopy_send_client": false, 01:02:37.801 "enable_zerocopy_send_server": true, 01:02:37.802 "impl_name": "ssl", 01:02:37.802 "recv_buf_size": 4096, 01:02:37.802 "send_buf_size": 4096, 01:02:37.802 "tls_version": 0, 01:02:37.802 "zerocopy_threshold": 0 01:02:37.802 } 01:02:37.802 }, 01:02:37.802 { 01:02:37.802 "method": "sock_impl_set_options", 01:02:37.802 "params": { 01:02:37.802 "enable_ktls": false, 01:02:37.802 "enable_placement_id": 0, 01:02:37.802 "enable_quickack": false, 01:02:37.802 "enable_recv_pipe": true, 01:02:37.802 "enable_zerocopy_send_client": false, 01:02:37.802 "enable_zerocopy_send_server": true, 01:02:37.802 "impl_name": "posix", 01:02:37.802 "recv_buf_size": 2097152, 01:02:37.802 "send_buf_size": 2097152, 01:02:37.802 "tls_version": 0, 01:02:37.802 "zerocopy_threshold": 0 01:02:37.802 } 01:02:37.802 } 01:02:37.802 ] 01:02:37.802 }, 01:02:37.802 { 01:02:37.802 "subsystem": "vmd", 01:02:37.802 "config": [] 01:02:37.802 }, 01:02:37.802 { 01:02:37.802 "subsystem": "accel", 01:02:37.802 "config": [ 01:02:37.802 { 01:02:37.802 "method": "accel_set_options", 01:02:37.802 "params": { 01:02:37.802 "buf_count": 2048, 01:02:37.802 "large_cache_size": 16, 01:02:37.802 "sequence_count": 2048, 01:02:37.802 "small_cache_size": 128, 01:02:37.802 "task_count": 2048 01:02:37.802 } 01:02:37.802 } 01:02:37.802 ] 01:02:37.802 }, 01:02:37.802 { 01:02:37.802 "subsystem": "bdev", 01:02:37.802 "config": [ 01:02:37.802 { 01:02:37.802 "method": "bdev_set_options", 01:02:37.802 "params": { 01:02:37.802 "bdev_auto_examine": true, 01:02:37.802 "bdev_io_cache_size": 256, 01:02:37.802 "bdev_io_pool_size": 65535, 01:02:37.802 "iobuf_large_cache_size": 16, 01:02:37.802 "iobuf_small_cache_size": 128 01:02:37.802 } 01:02:37.802 }, 01:02:37.802 { 01:02:37.802 "method": "bdev_raid_set_options", 01:02:37.802 "params": { 01:02:37.802 "process_max_bandwidth_mb_sec": 0, 01:02:37.802 "process_window_size_kb": 1024 01:02:37.802 } 01:02:37.802 }, 01:02:37.802 { 01:02:37.802 "method": "bdev_iscsi_set_options", 01:02:37.802 "params": { 01:02:37.802 "timeout_sec": 30 01:02:37.802 } 01:02:37.802 }, 01:02:37.802 { 01:02:37.802 "method": "bdev_nvme_set_options", 01:02:37.802 "params": { 01:02:37.802 "action_on_timeout": "none", 01:02:37.802 "allow_accel_sequence": false, 01:02:37.802 "arbitration_burst": 0, 01:02:37.802 "bdev_retry_count": 3, 01:02:37.802 "ctrlr_loss_timeout_sec": 0, 01:02:37.802 "delay_cmd_submit": true, 01:02:37.802 "dhchap_dhgroups": [ 01:02:37.802 "null", 01:02:37.802 "ffdhe2048", 01:02:37.802 "ffdhe3072", 01:02:37.802 "ffdhe4096", 01:02:37.802 "ffdhe6144", 01:02:37.802 "ffdhe8192" 01:02:37.802 ], 01:02:37.802 "dhchap_digests": [ 01:02:37.802 "sha256", 01:02:37.802 "sha384", 01:02:37.802 "sha512" 01:02:37.802 ], 01:02:37.802 "disable_auto_failback": false, 01:02:37.802 "fast_io_fail_timeout_sec": 0, 01:02:37.802 "generate_uuids": false, 01:02:37.802 "high_priority_weight": 0, 01:02:37.802 
"io_path_stat": false, 01:02:37.802 "io_queue_requests": 0, 01:02:37.802 "keep_alive_timeout_ms": 10000, 01:02:37.802 "low_priority_weight": 0, 01:02:37.802 "medium_priority_weight": 0, 01:02:37.802 "nvme_adminq_poll_period_us": 10000, 01:02:37.802 "nvme_error_stat": false, 01:02:37.802 "nvme_ioq_poll_period_us": 0, 01:02:37.802 "rdma_cm_event_timeout_ms": 0, 01:02:37.802 "rdma_max_cq_size": 0, 01:02:37.802 "rdma_srq_size": 0, 01:02:37.802 "reconnect_delay_sec": 0, 01:02:37.802 "timeout_admin_us": 0, 01:02:37.802 "timeout_us": 0, 01:02:37.802 "transport_ack_timeout": 0, 01:02:37.802 "transport_retry_count": 4, 01:02:37.802 "transport_tos": 0 01:02:37.802 } 01:02:37.802 }, 01:02:37.802 { 01:02:37.802 "method": "bdev_nvme_set_hotplug", 01:02:37.802 "params": { 01:02:37.802 "enable": false, 01:02:37.802 "period_us": 100000 01:02:37.802 } 01:02:37.802 }, 01:02:37.802 { 01:02:37.802 "method": "bdev_malloc_create", 01:02:37.802 "params": { 01:02:37.802 "block_size": 4096, 01:02:37.802 "dif_is_head_of_md": false, 01:02:37.802 "dif_pi_format": 0, 01:02:37.802 "dif_type": 0, 01:02:37.802 "md_size": 0, 01:02:37.802 "name": "malloc0", 01:02:37.802 "num_blocks": 8192, 01:02:37.802 "optimal_io_boundary": 0, 01:02:37.802 "physical_block_size": 4096, 01:02:37.802 "uuid": "bbe7d11e-5a0f-4ba1-b35f-2230f111212e" 01:02:37.802 } 01:02:37.802 }, 01:02:37.802 { 01:02:37.802 "method": "bdev_wait_for_examine" 01:02:37.802 } 01:02:37.802 ] 01:02:37.802 }, 01:02:37.802 { 01:02:37.802 "subsystem": "nbd", 01:02:37.802 "config": [] 01:02:37.802 }, 01:02:37.802 { 01:02:37.802 "subsystem": "scheduler", 01:02:37.802 "config": [ 01:02:37.802 { 01:02:37.802 "method": "framework_set_scheduler", 01:02:37.802 "params": { 01:02:37.802 "name": "static" 01:02:37.802 } 01:02:37.802 } 01:02:37.802 ] 01:02:37.802 }, 01:02:37.802 { 01:02:37.802 "subsystem": "nvmf", 01:02:37.802 "config": [ 01:02:37.802 { 01:02:37.802 "method": "nvmf_set_config", 01:02:37.802 "params": { 01:02:37.802 "admin_cmd_passthru": { 01:02:37.802 "identify_ctrlr": false 01:02:37.802 }, 01:02:37.802 "dhchap_dhgroups": [ 01:02:37.802 "null", 01:02:37.802 "ffdhe2048", 01:02:37.802 "ffdhe3072", 01:02:37.802 "ffdhe4096", 01:02:37.802 "ffdhe6144", 01:02:37.802 "ffdhe8192" 01:02:37.802 ], 01:02:37.802 "dhchap_digests": [ 01:02:37.802 "sha256", 01:02:37.802 "sha384", 01:02:37.802 "sha512" 01:02:37.802 ], 01:02:37.802 "discovery_filter": "match_any" 01:02:37.802 } 01:02:37.802 }, 01:02:37.802 { 01:02:37.802 "method": "nvmf_set_max_subsystems", 01:02:37.802 "params": { 01:02:37.802 "max_subsystems": 1024 01:02:37.802 } 01:02:37.802 }, 01:02:37.802 { 01:02:37.802 "method": "nvmf_set_crdt", 01:02:37.802 "params": { 01:02:37.802 "crdt1": 0, 01:02:37.802 "crdt2": 0, 01:02:37.802 "crdt3": 0 01:02:37.802 } 01:02:37.802 }, 01:02:37.802 { 01:02:37.802 "method": "nvmf_create_transport", 01:02:37.802 "params": { 01:02:37.802 "abort_timeout_sec": 1, 01:02:37.802 "ack_timeout": 0, 01:02:37.802 "buf_cache_size": 4294967295, 01:02:37.802 "c2h_success": false, 01:02:37.802 "data_wr_pool_size": 0, 01:02:37.802 "dif_insert_or_strip": false, 01:02:37.802 "in_capsule_data_size": 4096, 01:02:37.802 "io_unit_size": 131072, 01:02:37.802 "max_aq_depth": 128, 01:02:37.802 "max_io_qpairs_per_ctrlr": 127, 01:02:37.802 "max_io_size": 131072, 01:02:37.802 "max_queue_depth": 128, 01:02:37.802 "num_shared_buffers": 511, 01:02:37.802 "sock_priority": 0, 01:02:37.802 "trtype": "TCP", 01:02:37.802 "zcopy": false 01:02:37.802 } 01:02:37.802 }, 01:02:37.802 { 01:02:37.802 "method": 
"nvmf_create_subsystem", 01:02:37.802 "params": { 01:02:37.802 "allow_any_host": false, 01:02:37.802 "ana_reporting": false, 01:02:37.802 "max_cntlid": 65519, 01:02:37.802 "max_namespaces": 10, 01:02:37.802 "min_cntlid": 1, 01:02:37.802 "model_number": "SPDK bdev Controller", 01:02:37.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:02:37.802 "serial_number": "SPDK00000000000001" 01:02:37.802 } 01:02:37.802 }, 01:02:37.802 { 01:02:37.802 "method": "nvmf_subsystem_add_host", 01:02:37.802 "params": { 01:02:37.802 "host": "nqn.2016-06.io.spdk:host1", 01:02:37.802 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:02:37.802 "psk": "key0" 01:02:37.802 } 01:02:37.802 }, 01:02:37.802 { 01:02:37.803 "method": "nvmf_subsystem_add_ns", 01:02:37.803 "params": { 01:02:37.803 "namespace": { 01:02:37.803 "bdev_name": "malloc0", 01:02:37.803 "nguid": "BBE7D11E5A0F4BA1B35F2230F111212E", 01:02:37.803 "no_auto_visible": false, 01:02:37.803 "nsid": 1, 01:02:37.803 "uuid": "bbe7d11e-5a0f-4ba1-b35f-2230f111212e" 01:02:37.803 }, 01:02:37.803 "nqn": "nqn.2016-06.io.spdk:cnode1" 01:02:37.803 } 01:02:37.803 }, 01:02:37.803 { 01:02:37.803 "method": "nvmf_subsystem_add_listener", 01:02:37.803 "params": { 01:02:37.803 "listen_address": { 01:02:37.803 "adrfam": "IPv4", 01:02:37.803 "traddr": "10.0.0.3", 01:02:37.803 "trsvcid": "4420", 01:02:37.803 "trtype": "TCP" 01:02:37.803 }, 01:02:37.803 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:02:37.803 "secure_channel": true 01:02:37.803 } 01:02:37.803 } 01:02:37.803 ] 01:02:37.803 } 01:02:37.803 ] 01:02:37.803 }' 01:02:37.803 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 01:02:37.803 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 01:02:37.803 "subsystems": [ 01:02:37.803 { 01:02:37.803 "subsystem": "keyring", 01:02:37.803 "config": [ 01:02:37.803 { 01:02:37.803 "method": "keyring_file_add_key", 01:02:37.803 "params": { 01:02:37.803 "name": "key0", 01:02:37.803 "path": "/tmp/tmp.2oyLjadhV8" 01:02:37.803 } 01:02:37.803 } 01:02:37.803 ] 01:02:37.803 }, 01:02:37.803 { 01:02:37.803 "subsystem": "iobuf", 01:02:37.803 "config": [ 01:02:37.803 { 01:02:37.803 "method": "iobuf_set_options", 01:02:37.803 "params": { 01:02:37.803 "enable_numa": false, 01:02:37.803 "large_bufsize": 135168, 01:02:37.803 "large_pool_count": 1024, 01:02:37.803 "small_bufsize": 8192, 01:02:37.803 "small_pool_count": 8192 01:02:37.803 } 01:02:37.803 } 01:02:37.803 ] 01:02:37.803 }, 01:02:37.803 { 01:02:37.803 "subsystem": "sock", 01:02:37.803 "config": [ 01:02:37.803 { 01:02:37.803 "method": "sock_set_default_impl", 01:02:37.803 "params": { 01:02:37.803 "impl_name": "posix" 01:02:37.803 } 01:02:37.803 }, 01:02:37.803 { 01:02:37.803 "method": "sock_impl_set_options", 01:02:37.803 "params": { 01:02:37.803 "enable_ktls": false, 01:02:37.803 "enable_placement_id": 0, 01:02:37.803 "enable_quickack": false, 01:02:37.803 "enable_recv_pipe": true, 01:02:37.803 "enable_zerocopy_send_client": false, 01:02:37.803 "enable_zerocopy_send_server": true, 01:02:37.803 "impl_name": "ssl", 01:02:37.803 "recv_buf_size": 4096, 01:02:37.803 "send_buf_size": 4096, 01:02:37.803 "tls_version": 0, 01:02:37.803 "zerocopy_threshold": 0 01:02:37.803 } 01:02:37.803 }, 01:02:37.803 { 01:02:37.803 "method": "sock_impl_set_options", 01:02:37.803 "params": { 01:02:37.803 "enable_ktls": false, 01:02:37.803 "enable_placement_id": 0, 01:02:37.803 "enable_quickack": false, 01:02:37.803 "enable_recv_pipe": true, 
01:02:37.803 "enable_zerocopy_send_client": false, 01:02:37.803 "enable_zerocopy_send_server": true, 01:02:37.803 "impl_name": "posix", 01:02:37.803 "recv_buf_size": 2097152, 01:02:37.803 "send_buf_size": 2097152, 01:02:37.803 "tls_version": 0, 01:02:37.803 "zerocopy_threshold": 0 01:02:37.803 } 01:02:37.803 } 01:02:37.803 ] 01:02:37.803 }, 01:02:37.803 { 01:02:37.803 "subsystem": "vmd", 01:02:37.803 "config": [] 01:02:37.803 }, 01:02:37.803 { 01:02:37.803 "subsystem": "accel", 01:02:37.803 "config": [ 01:02:37.803 { 01:02:37.803 "method": "accel_set_options", 01:02:37.803 "params": { 01:02:37.803 "buf_count": 2048, 01:02:37.803 "large_cache_size": 16, 01:02:37.803 "sequence_count": 2048, 01:02:37.803 "small_cache_size": 128, 01:02:37.803 "task_count": 2048 01:02:37.803 } 01:02:37.803 } 01:02:37.803 ] 01:02:37.803 }, 01:02:37.803 { 01:02:37.803 "subsystem": "bdev", 01:02:37.803 "config": [ 01:02:37.803 { 01:02:37.803 "method": "bdev_set_options", 01:02:37.803 "params": { 01:02:37.803 "bdev_auto_examine": true, 01:02:37.803 "bdev_io_cache_size": 256, 01:02:37.803 "bdev_io_pool_size": 65535, 01:02:37.803 "iobuf_large_cache_size": 16, 01:02:37.803 "iobuf_small_cache_size": 128 01:02:37.803 } 01:02:37.803 }, 01:02:37.803 { 01:02:37.803 "method": "bdev_raid_set_options", 01:02:37.803 "params": { 01:02:37.803 "process_max_bandwidth_mb_sec": 0, 01:02:37.803 "process_window_size_kb": 1024 01:02:37.803 } 01:02:37.803 }, 01:02:37.803 { 01:02:37.803 "method": "bdev_iscsi_set_options", 01:02:37.803 "params": { 01:02:37.803 "timeout_sec": 30 01:02:37.803 } 01:02:37.803 }, 01:02:37.803 { 01:02:37.803 "method": "bdev_nvme_set_options", 01:02:37.803 "params": { 01:02:37.803 "action_on_timeout": "none", 01:02:37.803 "allow_accel_sequence": false, 01:02:37.803 "arbitration_burst": 0, 01:02:37.803 "bdev_retry_count": 3, 01:02:37.803 "ctrlr_loss_timeout_sec": 0, 01:02:37.803 "delay_cmd_submit": true, 01:02:37.803 "dhchap_dhgroups": [ 01:02:37.803 "null", 01:02:37.803 "ffdhe2048", 01:02:37.803 "ffdhe3072", 01:02:37.803 "ffdhe4096", 01:02:37.803 "ffdhe6144", 01:02:37.803 "ffdhe8192" 01:02:37.803 ], 01:02:37.803 "dhchap_digests": [ 01:02:37.803 "sha256", 01:02:37.803 "sha384", 01:02:37.803 "sha512" 01:02:37.803 ], 01:02:37.803 "disable_auto_failback": false, 01:02:37.803 "fast_io_fail_timeout_sec": 0, 01:02:37.803 "generate_uuids": false, 01:02:37.803 "high_priority_weight": 0, 01:02:37.803 "io_path_stat": false, 01:02:37.803 "io_queue_requests": 512, 01:02:37.803 "keep_alive_timeout_ms": 10000, 01:02:37.803 "low_priority_weight": 0, 01:02:37.803 "medium_priority_weight": 0, 01:02:37.803 "nvme_adminq_poll_period_us": 10000, 01:02:37.803 "nvme_error_stat": false, 01:02:37.803 "nvme_ioq_poll_period_us": 0, 01:02:37.803 "rdma_cm_event_timeout_ms": 0, 01:02:37.803 "rdma_max_cq_size": 0, 01:02:37.803 "rdma_srq_size": 0, 01:02:37.803 "reconnect_delay_sec": 0, 01:02:37.803 "timeout_admin_us": 0, 01:02:37.803 "timeout_us": 0, 01:02:37.803 "transport_ack_timeout": 0, 01:02:37.803 "transport_retry_count": 4, 01:02:37.803 "transport_tos": 0 01:02:37.803 } 01:02:37.803 }, 01:02:37.803 { 01:02:37.803 "method": "bdev_nvme_attach_controller", 01:02:37.803 "params": { 01:02:37.803 "adrfam": "IPv4", 01:02:37.803 "ctrlr_loss_timeout_sec": 0, 01:02:37.803 "ddgst": false, 01:02:37.803 "fast_io_fail_timeout_sec": 0, 01:02:37.803 "hdgst": false, 01:02:37.803 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:02:37.803 "multipath": "multipath", 01:02:37.803 "name": "TLSTEST", 01:02:37.803 "prchk_guard": false, 01:02:37.803 "prchk_reftag": 
false, 01:02:37.803 "psk": "key0", 01:02:37.803 "reconnect_delay_sec": 0, 01:02:37.803 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:02:37.803 "traddr": "10.0.0.3", 01:02:37.803 "trsvcid": "4420", 01:02:37.803 "trtype": "TCP" 01:02:37.803 } 01:02:37.803 }, 01:02:37.803 { 01:02:37.803 "method": "bdev_nvme_set_hotplug", 01:02:37.803 "params": { 01:02:37.803 "enable": false, 01:02:37.803 "period_us": 100000 01:02:37.803 } 01:02:37.803 }, 01:02:37.803 { 01:02:37.803 "method": "bdev_wait_for_examine" 01:02:37.803 } 01:02:37.803 ] 01:02:37.803 }, 01:02:37.803 { 01:02:37.803 "subsystem": "nbd", 01:02:37.803 "config": [] 01:02:37.803 } 01:02:37.803 ] 01:02:37.803 }' 01:02:37.803 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 83160 01:02:37.803 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83160 ']' 01:02:37.803 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83160 01:02:37.803 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:02:38.062 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:02:38.062 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83160 01:02:38.062 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:02:38.062 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:02:38.062 killing process with pid 83160 01:02:38.062 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83160' 01:02:38.062 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83160 01:02:38.062 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83160 01:02:38.062 Received shutdown signal, test time was about 10.000000 seconds 01:02:38.062 01:02:38.062 Latency(us) 01:02:38.062 [2024-12-09T06:01:32.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:02:38.062 [2024-12-09T06:01:32.648Z] =================================================================================================================== 01:02:38.062 [2024-12-09T06:01:32.648Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:02:38.062 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 83049 01:02:38.062 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83049 ']' 01:02:38.062 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83049 01:02:38.062 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:02:38.062 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:02:38.062 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83049 01:02:38.062 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:02:38.062 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:02:38.062 killing process with pid 83049 01:02:38.062 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83049' 01:02:38.062 
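The two save_config dumps above are what tls.sh captures into $tgtconf (from the running target) and $bdevperfconf (from the bdevperf RPC socket). The target configuration is then replayed: the next nvmf_tgt start below reads it back through -c /dev/fd/62, which here appears to correspond to a process substitution of the saved JSON. A condensed sketch of that capture-and-replay pattern, assembled only from commands seen in this trace (the netns name, event mask and core mask are the ones used by this run):

tgtconf=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config)
bdevperfconf=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
# replay the captured target config; <(echo ...) is what surfaces as -c /dev/fd/62 in the trace
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")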
06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83049 01:02:38.062 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83049 01:02:38.321 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 01:02:38.321 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 01:02:38.321 "subsystems": [ 01:02:38.321 { 01:02:38.321 "subsystem": "keyring", 01:02:38.321 "config": [ 01:02:38.321 { 01:02:38.321 "method": "keyring_file_add_key", 01:02:38.321 "params": { 01:02:38.321 "name": "key0", 01:02:38.321 "path": "/tmp/tmp.2oyLjadhV8" 01:02:38.322 } 01:02:38.322 } 01:02:38.322 ] 01:02:38.322 }, 01:02:38.322 { 01:02:38.322 "subsystem": "iobuf", 01:02:38.322 "config": [ 01:02:38.322 { 01:02:38.322 "method": "iobuf_set_options", 01:02:38.322 "params": { 01:02:38.322 "enable_numa": false, 01:02:38.322 "large_bufsize": 135168, 01:02:38.322 "large_pool_count": 1024, 01:02:38.322 "small_bufsize": 8192, 01:02:38.322 "small_pool_count": 8192 01:02:38.322 } 01:02:38.322 } 01:02:38.322 ] 01:02:38.322 }, 01:02:38.322 { 01:02:38.322 "subsystem": "sock", 01:02:38.322 "config": [ 01:02:38.322 { 01:02:38.322 "method": "sock_set_default_impl", 01:02:38.322 "params": { 01:02:38.322 "impl_name": "posix" 01:02:38.322 } 01:02:38.322 }, 01:02:38.322 { 01:02:38.322 "method": "sock_impl_set_options", 01:02:38.322 "params": { 01:02:38.322 "enable_ktls": false, 01:02:38.322 "enable_placement_id": 0, 01:02:38.322 "enable_quickack": false, 01:02:38.322 "enable_recv_pipe": true, 01:02:38.322 "enable_zerocopy_send_client": false, 01:02:38.322 "enable_zerocopy_send_server": true, 01:02:38.322 "impl_name": "ssl", 01:02:38.322 "recv_buf_size": 4096, 01:02:38.322 "send_buf_size": 4096, 01:02:38.322 "tls_version": 0, 01:02:38.322 "zerocopy_threshold": 0 01:02:38.322 } 01:02:38.322 }, 01:02:38.322 { 01:02:38.322 "method": "sock_impl_set_options", 01:02:38.322 "params": { 01:02:38.322 "enable_ktls": false, 01:02:38.322 "enable_placement_id": 0, 01:02:38.322 "enable_quickack": false, 01:02:38.322 "enable_recv_pipe": true, 01:02:38.322 "enable_zerocopy_send_client": false, 01:02:38.322 "enable_zerocopy_send_server": true, 01:02:38.322 "impl_name": "posix", 01:02:38.322 "recv_buf_size": 2097152, 01:02:38.322 "send_buf_size": 2097152, 01:02:38.322 "tls_version": 0, 01:02:38.322 "zerocopy_threshold": 0 01:02:38.322 } 01:02:38.322 } 01:02:38.322 ] 01:02:38.322 }, 01:02:38.322 { 01:02:38.322 "subsystem": "vmd", 01:02:38.322 "config": [] 01:02:38.322 }, 01:02:38.322 { 01:02:38.322 "subsystem": "accel", 01:02:38.322 "config": [ 01:02:38.322 { 01:02:38.322 "method": "accel_set_options", 01:02:38.322 "params": { 01:02:38.322 "buf_count": 2048, 01:02:38.322 "large_cache_size": 16, 01:02:38.322 "sequence_count": 2048, 01:02:38.322 "small_cache_size": 128, 01:02:38.322 "task_count": 2048 01:02:38.322 } 01:02:38.322 } 01:02:38.322 ] 01:02:38.322 }, 01:02:38.322 { 01:02:38.322 "subsystem": "bdev", 01:02:38.322 "config": [ 01:02:38.322 { 01:02:38.322 "method": "bdev_set_options", 01:02:38.322 "params": { 01:02:38.322 "bdev_auto_examine": true, 01:02:38.322 "bdev_io_cache_size": 256, 01:02:38.322 "bdev_io_pool_size": 65535, 01:02:38.322 "iobuf_large_cache_size": 16, 01:02:38.322 "iobuf_small_cache_size": 128 01:02:38.322 } 01:02:38.322 }, 01:02:38.322 { 01:02:38.322 "method": "bdev_raid_set_options", 01:02:38.322 "params": { 01:02:38.322 "process_max_bandwidth_mb_sec": 0, 01:02:38.322 "process_window_size_kb": 
1024 01:02:38.322 } 01:02:38.322 }, 01:02:38.322 { 01:02:38.322 "method": "bdev_iscsi_set_options", 01:02:38.322 "params": { 01:02:38.322 "timeout_sec": 30 01:02:38.322 } 01:02:38.322 }, 01:02:38.322 { 01:02:38.322 "method": "bdev_nvme_set_options", 01:02:38.322 "params": { 01:02:38.322 "action_on_timeout": "none", 01:02:38.322 "allow_accel_sequence": false, 01:02:38.322 "arbitration_burst": 0, 01:02:38.322 "bdev_retry_count": 3, 01:02:38.322 "ctrlr_loss_timeout_sec": 0, 01:02:38.322 "delay_cmd_submit": true, 01:02:38.322 "dhchap_dhgroups": [ 01:02:38.322 "null", 01:02:38.322 "ffdhe2048", 01:02:38.322 "ffdhe3072", 01:02:38.322 "ffdhe4096", 01:02:38.322 "ffdhe6144", 01:02:38.322 "ffdhe8192" 01:02:38.322 ], 01:02:38.322 "dhchap_digests": [ 01:02:38.322 "sha256", 01:02:38.322 "sha384", 01:02:38.322 "sha512" 01:02:38.322 ], 01:02:38.322 "disable_auto_failback": false, 01:02:38.322 "fast_io_fail_timeout_sec": 0, 01:02:38.322 "generate_uuids": false, 01:02:38.322 "high_priority_weight": 0, 01:02:38.322 "io_path_stat": false, 01:02:38.322 "io_queue_requests": 0, 01:02:38.322 "keep_alive_timeout_ms": 10000, 01:02:38.322 "low_priority_weight": 0, 01:02:38.322 "medium_priority_weight": 0, 01:02:38.322 "nvme_adminq_poll_period_us": 10000, 01:02:38.322 "nvme_error_stat": false, 01:02:38.322 "nvme_ioq_poll_period_us": 0, 01:02:38.322 "rdma_cm_event_timeout_ms": 0, 01:02:38.322 "rdma_max_cq_size": 0, 01:02:38.322 "rdma_srq_size": 0, 01:02:38.322 "reconnect_delay_sec": 0, 01:02:38.322 "timeout_admin_us": 0, 01:02:38.322 "timeout_us": 0, 01:02:38.322 "transport_ack_timeout": 0, 01:02:38.322 "transport_retry_count": 4, 01:02:38.322 "transport_tos": 0 01:02:38.322 } 01:02:38.322 }, 01:02:38.322 { 01:02:38.322 "method": "bdev_nvme_set_hotplug", 01:02:38.322 "params": { 01:02:38.322 "enable": false, 01:02:38.322 "period_us": 100000 01:02:38.322 } 01:02:38.322 }, 01:02:38.322 { 01:02:38.322 "method": "bdev_malloc_create", 01:02:38.322 "params": { 01:02:38.322 "block_size": 4096, 01:02:38.322 "dif_is_head_of_md": false, 01:02:38.322 "dif_pi_format": 0, 01:02:38.322 "dif_type": 0, 01:02:38.322 "md_size": 0, 01:02:38.322 "name": "malloc0", 01:02:38.322 "num_blocks": 8192, 01:02:38.322 "optimal_io_boundary": 0, 01:02:38.322 "physical_block_size": 4096, 01:02:38.322 "uuid": "bbe7d11e-5a0f-4ba1-b35f-2230f111212e" 01:02:38.322 } 01:02:38.322 }, 01:02:38.322 { 01:02:38.322 "method": "bdev_wait_for_examine" 01:02:38.322 } 01:02:38.322 ] 01:02:38.322 }, 01:02:38.322 { 01:02:38.322 "subsystem": "nbd", 01:02:38.322 "config": [] 01:02:38.322 }, 01:02:38.322 { 01:02:38.322 "subsystem": "scheduler", 01:02:38.322 "config": [ 01:02:38.322 { 01:02:38.322 "method": "framework_set_scheduler", 01:02:38.322 "params": { 01:02:38.322 "name": "static" 01:02:38.322 } 01:02:38.322 } 01:02:38.322 ] 01:02:38.322 }, 01:02:38.322 { 01:02:38.322 "subsystem": "nvmf", 01:02:38.322 "config": [ 01:02:38.322 { 01:02:38.322 "method": "nvmf_set_config", 01:02:38.322 "params": { 01:02:38.322 "admin_cmd_passthru": { 01:02:38.322 "identify_ctrlr": false 01:02:38.322 }, 01:02:38.322 "dhchap_dhgroups": [ 01:02:38.322 "null", 01:02:38.322 "ffdhe2048", 01:02:38.322 "ffdhe3072", 01:02:38.322 "ffdhe4096", 01:02:38.322 "ffdhe6144", 01:02:38.322 "ffdhe8192" 01:02:38.322 ], 01:02:38.322 "dhchap_digests": [ 01:02:38.322 "sha256", 01:02:38.322 "sha384", 01:02:38.322 "sha512" 01:02:38.322 ], 01:02:38.322 "discovery_filter": "match_any" 01:02:38.322 } 01:02:38.322 }, 01:02:38.322 { 01:02:38.322 "method": "nvmf_set_max_subsystems", 01:02:38.322 "params": { 
01:02:38.322 "max_subsystems": 1024 01:02:38.322 } 01:02:38.322 }, 01:02:38.322 { 01:02:38.322 "method": "nvmf_set_crdt", 01:02:38.322 "params": { 01:02:38.322 "crdt1": 0, 01:02:38.322 "crdt2": 0, 01:02:38.322 "crdt3": 0 01:02:38.322 } 01:02:38.322 }, 01:02:38.322 { 01:02:38.322 "method": "nvmf_create_transport", 01:02:38.322 "params": { 01:02:38.322 "abort_timeout_sec": 1, 01:02:38.322 "ack_timeout": 0, 01:02:38.322 "buf_cache_size": 4294967295, 01:02:38.323 "c2h_success": false, 01:02:38.323 "data_wr_pool_size": 0, 01:02:38.323 "dif_insert_or_strip": false, 01:02:38.323 "in_capsule_data_size": 4096, 01:02:38.323 "io_unit_size": 131072, 01:02:38.323 "max_aq_depth": 128, 01:02:38.323 "max_io_qpairs_per_ctrlr": 127, 01:02:38.323 "max_io_size": 131072, 01:02:38.323 "max_queue_depth": 128, 01:02:38.323 "num_shared_buffers": 511, 01:02:38.323 "sock_priority": 0, 01:02:38.323 "trtype": "TCP", 01:02:38.323 "zcopy": false 01:02:38.323 } 01:02:38.323 }, 01:02:38.323 { 01:02:38.323 "method": "nvmf_create_subsystem", 01:02:38.323 "params": { 01:02:38.323 "allow_any_host": false, 01:02:38.323 "ana_reporting": false, 01:02:38.323 "max_cntlid": 65519, 01:02:38.323 "max_namespaces": 10, 01:02:38.323 "min_cntlid": 1, 01:02:38.323 "model_number": "SPDK bdev Controller", 01:02:38.323 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:02:38.323 "serial_number": "SPDK00000000000001" 01:02:38.323 } 01:02:38.323 }, 01:02:38.323 { 01:02:38.323 "method": "nvmf_subsystem_add_host", 01:02:38.323 "params": { 01:02:38.323 "host": "nqn.2016-06.io.spdk:host1", 01:02:38.323 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:02:38.323 "psk": "key0" 01:02:38.323 } 01:02:38.323 }, 01:02:38.323 { 01:02:38.323 "method": "nvmf_subsystem_add_ns", 01:02:38.323 "params": { 01:02:38.323 "namespace": { 01:02:38.323 "bdev_name": "malloc0", 01:02:38.323 "nguid": "BBE7D11E5A0F4BA1B35F2230F111212E", 01:02:38.323 "no_auto_visible": false, 01:02:38.323 "nsid": 1, 01:02:38.323 "uuid": "bbe7d11e-5a0f-4ba1-b35f-2230f111212e" 01:02:38.323 }, 01:02:38.323 "nqn": "nqn.2016-06.io.spdk:cnode1" 01:02:38.323 } 01:02:38.323 }, 01:02:38.323 { 01:02:38.323 "method": "nvmf_subsystem_add_listener", 01:02:38.323 "params": { 01:02:38.323 "listen_address": { 01:02:38.323 "adrfam": "IPv4", 01:02:38.323 "traddr": "10.0.0.3", 01:02:38.323 "trsvcid": "4420", 01:02:38.323 "trtype": "TCP" 01:02:38.323 }, 01:02:38.323 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:02:38.323 "secure_channel": true 01:02:38.323 } 01:02:38.323 } 01:02:38.323 ] 01:02:38.323 } 01:02:38.323 ] 01:02:38.323 }' 01:02:38.323 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:02:38.323 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 01:02:38.323 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:38.323 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83231 01:02:38.323 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 01:02:38.323 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83231 01:02:38.323 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83231 ']' 01:02:38.323 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:02:38.323 06:01:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:02:38.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:02:38.323 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:02:38.323 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:02:38.323 06:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:38.323 [2024-12-09 06:01:32.744005] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:02:38.323 [2024-12-09 06:01:32.744132] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:02:38.323 [2024-12-09 06:01:32.882653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:38.582 [2024-12-09 06:01:32.912177] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:02:38.582 [2024-12-09 06:01:32.912241] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:02:38.582 [2024-12-09 06:01:32.912266] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:02:38.582 [2024-12-09 06:01:32.912288] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:02:38.582 [2024-12-09 06:01:32.912295] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
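The -c /dev/fd/62 argument above, paired with the long echo of a JSON document, is the usual bash-xtrace footprint of process substitution: the test script most likely hands the configuration to the target as an anonymous pipe, so the keyring entry (key0 -> /tmp/tmp.2oyLjadhV8), the ssl/posix sock options, and the TLS-enabled subsystem and listener are all in place before nvmf_tgt starts listening on 10.0.0.3:4420. A minimal sketch of that invocation, with tgt_json as a placeholder variable name that does not appear in the trace:

    # Hedged sketch: pass the echoed JSON to the target via process substitution;
    # bash exposes the resulting pipe to nvmf_tgt as /dev/fd/62 (tgt_json is illustrative)
    nvmfappstart -m 0x2 -c <(echo "$tgt_json")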
01:02:38.582 [2024-12-09 06:01:32.912675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:02:38.582 [2024-12-09 06:01:33.098103] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:02:38.582 [2024-12-09 06:01:33.130051] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:02:38.582 [2024-12-09 06:01:33.130240] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:02:39.150 06:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:02:39.150 06:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:02:39.150 06:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:02:39.150 06:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:02:39.150 06:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:39.150 06:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:02:39.150 06:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=83275 01:02:39.150 06:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 83275 /var/tmp/bdevperf.sock 01:02:39.150 06:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83275 ']' 01:02:39.150 06:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:02:39.150 06:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:02:39.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:02:39.150 06:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
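The bdevperf initiator launched next receives its own JSON on /dev/fd/63; it registers the same key file under the name key0 and issues bdev_nvme_attach_controller with "psk": "key0", so both ends of the TCP connection derive the TLS session from the identical pre-shared key. A later run in this log performs the same two steps over the RPC socket instead of a startup config; condensed, that form is (a sketch assuming SPDK's scripts/rpc.py and examples/bdev/bdevperf/bdevperf.py are invoked as rpc.py and bdevperf.py):

    # Host-side TLS attach over bdevperf's RPC socket, mirroring the trace further below
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2oyLjadhV8
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests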
01:02:39.150 06:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 01:02:39.150 06:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:02:39.150 06:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 01:02:39.150 "subsystems": [ 01:02:39.150 { 01:02:39.150 "subsystem": "keyring", 01:02:39.150 "config": [ 01:02:39.150 { 01:02:39.150 "method": "keyring_file_add_key", 01:02:39.150 "params": { 01:02:39.150 "name": "key0", 01:02:39.150 "path": "/tmp/tmp.2oyLjadhV8" 01:02:39.150 } 01:02:39.150 } 01:02:39.150 ] 01:02:39.150 }, 01:02:39.150 { 01:02:39.150 "subsystem": "iobuf", 01:02:39.150 "config": [ 01:02:39.150 { 01:02:39.150 "method": "iobuf_set_options", 01:02:39.150 "params": { 01:02:39.150 "enable_numa": false, 01:02:39.150 "large_bufsize": 135168, 01:02:39.150 "large_pool_count": 1024, 01:02:39.150 "small_bufsize": 8192, 01:02:39.150 "small_pool_count": 8192 01:02:39.150 } 01:02:39.150 } 01:02:39.150 ] 01:02:39.150 }, 01:02:39.150 { 01:02:39.150 "subsystem": "sock", 01:02:39.150 "config": [ 01:02:39.150 { 01:02:39.150 "method": "sock_set_default_impl", 01:02:39.150 "params": { 01:02:39.150 "impl_name": "posix" 01:02:39.150 } 01:02:39.150 }, 01:02:39.150 { 01:02:39.150 "method": "sock_impl_set_options", 01:02:39.150 "params": { 01:02:39.150 "enable_ktls": false, 01:02:39.150 "enable_placement_id": 0, 01:02:39.150 "enable_quickack": false, 01:02:39.150 "enable_recv_pipe": true, 01:02:39.150 "enable_zerocopy_send_client": false, 01:02:39.150 "enable_zerocopy_send_server": true, 01:02:39.150 "impl_name": "ssl", 01:02:39.150 "recv_buf_size": 4096, 01:02:39.150 "send_buf_size": 4096, 01:02:39.150 "tls_version": 0, 01:02:39.150 "zerocopy_threshold": 0 01:02:39.150 } 01:02:39.150 }, 01:02:39.150 { 01:02:39.150 "method": "sock_impl_set_options", 01:02:39.150 "params": { 01:02:39.150 "enable_ktls": false, 01:02:39.150 "enable_placement_id": 0, 01:02:39.150 "enable_quickack": false, 01:02:39.150 "enable_recv_pipe": true, 01:02:39.150 "enable_zerocopy_send_client": false, 01:02:39.150 "enable_zerocopy_send_server": true, 01:02:39.150 "impl_name": "posix", 01:02:39.150 "recv_buf_size": 2097152, 01:02:39.150 "send_buf_size": 2097152, 01:02:39.150 "tls_version": 0, 01:02:39.150 "zerocopy_threshold": 0 01:02:39.150 } 01:02:39.150 } 01:02:39.151 ] 01:02:39.151 }, 01:02:39.151 { 01:02:39.151 "subsystem": "vmd", 01:02:39.151 "config": [] 01:02:39.151 }, 01:02:39.151 { 01:02:39.151 "subsystem": "accel", 01:02:39.151 "config": [ 01:02:39.151 { 01:02:39.151 "method": "accel_set_options", 01:02:39.151 "params": { 01:02:39.151 "buf_count": 2048, 01:02:39.151 "large_cache_size": 16, 01:02:39.151 "sequence_count": 2048, 01:02:39.151 "small_cache_size": 128, 01:02:39.151 "task_count": 2048 01:02:39.151 } 01:02:39.151 } 01:02:39.151 ] 01:02:39.151 }, 01:02:39.151 { 01:02:39.151 "subsystem": "bdev", 01:02:39.151 "config": [ 01:02:39.151 { 01:02:39.151 "method": "bdev_set_options", 01:02:39.151 "params": { 01:02:39.151 "bdev_auto_examine": true, 01:02:39.151 "bdev_io_cache_size": 256, 01:02:39.151 "bdev_io_pool_size": 65535, 01:02:39.151 "iobuf_large_cache_size": 16, 01:02:39.151 "iobuf_small_cache_size": 128 01:02:39.151 } 01:02:39.151 }, 01:02:39.151 { 01:02:39.151 "method": "bdev_raid_set_options", 01:02:39.151 "params": { 01:02:39.151 "process_max_bandwidth_mb_sec": 0, 01:02:39.151 
"process_window_size_kb": 1024 01:02:39.151 } 01:02:39.151 }, 01:02:39.151 { 01:02:39.151 "method": "bdev_iscsi_set_options", 01:02:39.151 "params": { 01:02:39.151 "timeout_sec": 30 01:02:39.151 } 01:02:39.151 }, 01:02:39.151 { 01:02:39.151 "method": "bdev_nvme_set_options", 01:02:39.151 "params": { 01:02:39.151 "action_on_timeout": "none", 01:02:39.151 "allow_accel_sequence": false, 01:02:39.151 "arbitration_burst": 0, 01:02:39.151 "bdev_retry_count": 3, 01:02:39.151 "ctrlr_loss_timeout_sec": 0, 01:02:39.151 "delay_cmd_submit": true, 01:02:39.151 "dhchap_dhgroups": [ 01:02:39.151 "null", 01:02:39.151 "ffdhe2048", 01:02:39.151 "ffdhe3072", 01:02:39.151 "ffdhe4096", 01:02:39.151 "ffdhe6144", 01:02:39.151 "ffdhe8192" 01:02:39.151 ], 01:02:39.151 "dhchap_digests": [ 01:02:39.151 "sha256", 01:02:39.151 "sha384", 01:02:39.151 "sha512" 01:02:39.151 ], 01:02:39.151 "disable_auto_failback": false, 01:02:39.151 "fast_io_fail_timeout_sec": 0, 01:02:39.151 "generate_uuids": false, 01:02:39.151 "high_priority_weight": 0, 01:02:39.151 "io_path_stat": false, 01:02:39.151 "io_queue_requests": 512, 01:02:39.151 "keep_alive_timeout_ms": 10000, 01:02:39.151 "low_priority_weight": 0, 01:02:39.151 "medium_priority_weight": 0, 01:02:39.151 "nvme_adminq_poll_period_us": 10000, 01:02:39.151 "nvme_error_stat": false, 01:02:39.151 "nvme_ioq_poll_period_us": 0, 01:02:39.151 06:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:39.151 "rdma_cm_event_timeout_ms": 0, 01:02:39.151 "rdma_max_cq_size": 0, 01:02:39.151 "rdma_srq_size": 0, 01:02:39.151 "reconnect_delay_sec": 0, 01:02:39.151 "timeout_admin_us": 0, 01:02:39.151 "timeout_us": 0, 01:02:39.151 "transport_ack_timeout": 0, 01:02:39.151 "transport_retry_count": 4, 01:02:39.151 "transport_tos": 0 01:02:39.151 } 01:02:39.151 }, 01:02:39.151 { 01:02:39.151 "method": "bdev_nvme_attach_controller", 01:02:39.151 "params": { 01:02:39.151 "adrfam": "IPv4", 01:02:39.151 "ctrlr_loss_timeout_sec": 0, 01:02:39.151 "ddgst": false, 01:02:39.151 "fast_io_fail_timeout_sec": 0, 01:02:39.151 "hdgst": false, 01:02:39.151 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:02:39.151 "multipath": "multipath", 01:02:39.151 "name": "TLSTEST", 01:02:39.151 "prchk_guard": false, 01:02:39.151 "prchk_reftag": false, 01:02:39.151 "psk": "key0", 01:02:39.151 "reconnect_delay_sec": 0, 01:02:39.151 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:02:39.151 "traddr": "10.0.0.3", 01:02:39.151 "trsvcid": "4420", 01:02:39.151 "trtype": "TCP" 01:02:39.151 } 01:02:39.151 }, 01:02:39.151 { 01:02:39.151 "method": "bdev_nvme_set_hotplug", 01:02:39.151 "params": { 01:02:39.151 "enable": false, 01:02:39.151 "period_us": 100000 01:02:39.151 } 01:02:39.151 }, 01:02:39.151 { 01:02:39.151 "method": "bdev_wait_for_examine" 01:02:39.151 } 01:02:39.151 ] 01:02:39.151 }, 01:02:39.151 { 01:02:39.151 "subsystem": "nbd", 01:02:39.151 "config": [] 01:02:39.151 } 01:02:39.151 ] 01:02:39.151 }' 01:02:39.409 [2024-12-09 06:01:33.775864] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:02:39.409 [2024-12-09 06:01:33.775967] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83275 ] 01:02:39.409 [2024-12-09 06:01:33.928590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:39.410 [2024-12-09 06:01:33.967692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:02:39.668 [2024-12-09 06:01:34.109336] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:02:40.238 06:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:02:40.238 06:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:02:40.238 06:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 01:02:40.238 Running I/O for 10 seconds... 01:02:42.547 4846.00 IOPS, 18.93 MiB/s [2024-12-09T06:01:38.066Z] 4898.00 IOPS, 19.13 MiB/s [2024-12-09T06:01:39.000Z] 4925.33 IOPS, 19.24 MiB/s [2024-12-09T06:01:39.934Z] 4939.75 IOPS, 19.30 MiB/s [2024-12-09T06:01:40.869Z] 4947.00 IOPS, 19.32 MiB/s [2024-12-09T06:01:42.244Z] 4942.50 IOPS, 19.31 MiB/s [2024-12-09T06:01:43.179Z] 4942.29 IOPS, 19.31 MiB/s [2024-12-09T06:01:44.113Z] 4946.50 IOPS, 19.32 MiB/s [2024-12-09T06:01:45.050Z] 4948.00 IOPS, 19.33 MiB/s [2024-12-09T06:01:45.050Z] 4954.40 IOPS, 19.35 MiB/s 01:02:50.464 Latency(us) 01:02:50.464 [2024-12-09T06:01:45.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:02:50.464 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:02:50.464 Verification LBA range: start 0x0 length 0x2000 01:02:50.464 TLSTESTn1 : 10.01 4960.22 19.38 0.00 0.00 25761.90 4766.25 22639.71 01:02:50.464 [2024-12-09T06:01:45.050Z] =================================================================================================================== 01:02:50.464 [2024-12-09T06:01:45.050Z] Total : 4960.22 19.38 0.00 0.00 25761.90 4766.25 22639.71 01:02:50.464 { 01:02:50.464 "results": [ 01:02:50.464 { 01:02:50.464 "job": "TLSTESTn1", 01:02:50.464 "core_mask": "0x4", 01:02:50.464 "workload": "verify", 01:02:50.464 "status": "finished", 01:02:50.464 "verify_range": { 01:02:50.464 "start": 0, 01:02:50.464 "length": 8192 01:02:50.464 }, 01:02:50.464 "queue_depth": 128, 01:02:50.464 "io_size": 4096, 01:02:50.464 "runtime": 10.014065, 01:02:50.464 "iops": 4960.22344572359, 01:02:50.464 "mibps": 19.375872834857773, 01:02:50.464 "io_failed": 0, 01:02:50.464 "io_timeout": 0, 01:02:50.464 "avg_latency_us": 25761.903517181803, 01:02:50.464 "min_latency_us": 4766.254545454545, 01:02:50.464 "max_latency_us": 22639.70909090909 01:02:50.464 } 01:02:50.464 ], 01:02:50.464 "core_count": 1 01:02:50.464 } 01:02:50.464 06:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:02:50.464 06:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 83275 01:02:50.464 06:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83275 ']' 01:02:50.464 06:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83275 01:02:50.464 06:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 01:02:50.464 06:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:02:50.464 06:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83275 01:02:50.464 06:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:02:50.464 06:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:02:50.465 killing process with pid 83275 01:02:50.465 06:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83275' 01:02:50.465 06:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83275 01:02:50.465 Received shutdown signal, test time was about 10.000000 seconds 01:02:50.465 01:02:50.465 Latency(us) 01:02:50.465 [2024-12-09T06:01:45.051Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:02:50.465 [2024-12-09T06:01:45.051Z] =================================================================================================================== 01:02:50.465 [2024-12-09T06:01:45.051Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:02:50.465 06:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83275 01:02:50.465 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 83231 01:02:50.465 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83231 ']' 01:02:50.465 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83231 01:02:50.465 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:02:50.465 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:02:50.465 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83231 01:02:50.465 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:02:50.465 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:02:50.465 killing process with pid 83231 01:02:50.465 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83231' 01:02:50.465 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83231 01:02:50.465 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83231 01:02:50.723 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 01:02:50.723 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:02:50.723 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 01:02:50.723 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:50.723 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83420 01:02:50.723 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 01:02:50.723 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83420 01:02:50.723 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 83420 ']' 01:02:50.723 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:02:50.723 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:02:50.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:02:50.723 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:02:50.723 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:02:50.723 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:50.723 [2024-12-09 06:01:45.214035] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:02:50.723 [2024-12-09 06:01:45.214114] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:02:50.982 [2024-12-09 06:01:45.360332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:50.982 [2024-12-09 06:01:45.397304] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:02:50.982 [2024-12-09 06:01:45.397375] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:02:50.982 [2024-12-09 06:01:45.397390] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:02:50.982 [2024-12-09 06:01:45.397401] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:02:50.982 [2024-12-09 06:01:45.397411] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
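Unlike the first instance, this target (pid 83420) is started without a -c config file, so tls.sh builds it up at run time over /var/tmp/spdk.sock. The setup_nvmf_tgt trace that follows boils down to the sequence below; the paths, NQNs and key name are copied from the trace, and rpc.py stands for SPDK's scripts/rpc.py run in the target's network namespace:

    # Runtime equivalent of the earlier startup JSON (condensed from the trace that follows)
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k requests a TLS listener
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.2oyLjadhV8
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0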
01:02:50.982 [2024-12-09 06:01:45.397787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:02:50.982 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:02:50.982 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:02:50.982 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:02:50.982 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:02:50.982 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:50.982 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:02:50.982 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.2oyLjadhV8 01:02:50.982 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.2oyLjadhV8 01:02:50.982 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:02:51.239 [2024-12-09 06:01:45.732458] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:02:51.240 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 01:02:51.498 06:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 01:02:51.757 [2024-12-09 06:01:46.176537] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:02:51.757 [2024-12-09 06:01:46.176767] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:02:51.757 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 01:02:52.015 malloc0 01:02:52.015 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:02:52.273 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.2oyLjadhV8 01:02:52.533 06:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 01:02:52.792 06:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 01:02:52.792 06:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=83516 01:02:52.792 06:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:02:52.792 06:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 83516 /var/tmp/bdevperf.sock 01:02:52.792 06:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83516 ']' 01:02:52.792 06:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
01:02:52.792 06:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:02:52.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:02:52.792 06:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:02:52.792 06:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:02:52.792 06:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:52.792 [2024-12-09 06:01:47.218404] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:02:52.792 [2024-12-09 06:01:47.218503] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83516 ] 01:02:52.792 [2024-12-09 06:01:47.354236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:53.051 [2024-12-09 06:01:47.384642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:02:53.619 06:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:02:53.619 06:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:02:53.619 06:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2oyLjadhV8 01:02:53.878 06:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 01:02:54.137 [2024-12-09 06:01:48.699854] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:02:54.397 nvme0n1 01:02:54.397 06:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:02:54.397 Running I/O for 1 seconds... 
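A quick consistency check on the statistics that follow: for this 4096-byte verify workload the MiB/s column is simply IOPS scaled by the I/O size, e.g. 4873.66 IOPS x 4096 B / 2^20 ≈ 19.04 MiB/s, matching the reported value. One way to reproduce the conversion (plain awk, with the numbers taken from the result block below):

    # Convert the reported IOPS to MiB/s for a 4 KiB I/O size
    awk 'BEGIN { iops = 4873.66; io_size = 4096; printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'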
01:02:55.594 4864.00 IOPS, 19.00 MiB/s 01:02:55.594 Latency(us) 01:02:55.594 [2024-12-09T06:01:50.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:02:55.594 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:02:55.594 Verification LBA range: start 0x0 length 0x2000 01:02:55.594 nvme0n1 : 1.02 4873.66 19.04 0.00 0.00 26022.63 6047.19 17039.36 01:02:55.594 [2024-12-09T06:01:50.180Z] =================================================================================================================== 01:02:55.594 [2024-12-09T06:01:50.180Z] Total : 4873.66 19.04 0.00 0.00 26022.63 6047.19 17039.36 01:02:55.594 { 01:02:55.594 "results": [ 01:02:55.594 { 01:02:55.594 "job": "nvme0n1", 01:02:55.594 "core_mask": "0x2", 01:02:55.594 "workload": "verify", 01:02:55.594 "status": "finished", 01:02:55.594 "verify_range": { 01:02:55.594 "start": 0, 01:02:55.594 "length": 8192 01:02:55.594 }, 01:02:55.594 "queue_depth": 128, 01:02:55.594 "io_size": 4096, 01:02:55.594 "runtime": 1.024282, 01:02:55.594 "iops": 4873.657840321318, 01:02:55.594 "mibps": 19.03772593875515, 01:02:55.594 "io_failed": 0, 01:02:55.594 "io_timeout": 0, 01:02:55.594 "avg_latency_us": 26022.62675990676, 01:02:55.594 "min_latency_us": 6047.185454545454, 01:02:55.594 "max_latency_us": 17039.36 01:02:55.594 } 01:02:55.594 ], 01:02:55.594 "core_count": 1 01:02:55.594 } 01:02:55.594 06:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 83516 01:02:55.594 06:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83516 ']' 01:02:55.594 06:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83516 01:02:55.594 06:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:02:55.594 06:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:02:55.594 06:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83516 01:02:55.594 killing process with pid 83516 01:02:55.594 Received shutdown signal, test time was about 1.000000 seconds 01:02:55.594 01:02:55.594 Latency(us) 01:02:55.594 [2024-12-09T06:01:50.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:02:55.594 [2024-12-09T06:01:50.180Z] =================================================================================================================== 01:02:55.594 [2024-12-09T06:01:50.180Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:02:55.594 06:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:02:55.594 06:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:02:55.594 06:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83516' 01:02:55.594 06:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83516 01:02:55.594 06:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83516 01:02:55.594 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 83420 01:02:55.594 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83420 ']' 01:02:55.594 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83420 01:02:55.594 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 01:02:55.594 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:02:55.594 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83420 01:02:55.594 killing process with pid 83420 01:02:55.594 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:02:55.594 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:02:55.594 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83420' 01:02:55.594 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83420 01:02:55.594 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83420 01:02:55.853 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 01:02:55.853 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:02:55.853 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 01:02:55.853 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:55.853 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83586 01:02:55.853 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 01:02:55.853 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83586 01:02:55.853 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83586 ']' 01:02:55.853 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:02:55.853 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:02:55.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:02:55.853 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:02:55.853 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:02:55.853 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:55.853 [2024-12-09 06:01:50.338734] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:02:55.853 [2024-12-09 06:01:50.338832] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:02:56.112 [2024-12-09 06:01:50.482173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:56.112 [2024-12-09 06:01:50.511970] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:02:56.112 [2024-12-09 06:01:50.512305] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
01:02:56.112 [2024-12-09 06:01:50.512377] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:02:56.112 [2024-12-09 06:01:50.512442] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:02:56.112 [2024-12-09 06:01:50.512504] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:02:56.112 [2024-12-09 06:01:50.512873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:02:56.112 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:02:56.112 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:02:56.112 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:02:56.112 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:02:56.112 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:56.112 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:02:56.112 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 01:02:56.112 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:56.112 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:56.112 [2024-12-09 06:01:50.642774] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:02:56.112 malloc0 01:02:56.113 [2024-12-09 06:01:50.668512] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:02:56.113 [2024-12-09 06:01:50.668904] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:02:56.372 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:56.372 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=83622 01:02:56.372 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 01:02:56.372 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 83622 /var/tmp/bdevperf.sock 01:02:56.372 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83622 ']' 01:02:56.372 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:02:56.372 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:02:56.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:02:56.372 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:02:56.372 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:02:56.372 06:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:56.372 [2024-12-09 06:01:50.745582] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:02:56.372 [2024-12-09 06:01:50.746084] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83622 ] 01:02:56.372 [2024-12-09 06:01:50.883678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:56.372 [2024-12-09 06:01:50.912055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:02:57.311 06:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:02:57.311 06:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:02:57.311 06:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2oyLjadhV8 01:02:57.580 06:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 01:02:57.580 [2024-12-09 06:01:52.159718] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:02:57.840 nvme0n1 01:02:57.840 06:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:02:57.840 Running I/O for 1 seconds... 01:02:59.216 4864.00 IOPS, 19.00 MiB/s 01:02:59.216 Latency(us) 01:02:59.216 [2024-12-09T06:01:53.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:02:59.216 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:02:59.216 Verification LBA range: start 0x0 length 0x2000 01:02:59.216 nvme0n1 : 1.02 4900.47 19.14 0.00 0.00 25882.19 6047.19 17635.14 01:02:59.216 [2024-12-09T06:01:53.802Z] =================================================================================================================== 01:02:59.216 [2024-12-09T06:01:53.802Z] Total : 4900.47 19.14 0.00 0.00 25882.19 6047.19 17635.14 01:02:59.216 { 01:02:59.216 "results": [ 01:02:59.216 { 01:02:59.216 "job": "nvme0n1", 01:02:59.216 "core_mask": "0x2", 01:02:59.216 "workload": "verify", 01:02:59.216 "status": "finished", 01:02:59.216 "verify_range": { 01:02:59.216 "start": 0, 01:02:59.216 "length": 8192 01:02:59.216 }, 01:02:59.216 "queue_depth": 128, 01:02:59.216 "io_size": 4096, 01:02:59.216 "runtime": 1.018678, 01:02:59.216 "iops": 4900.4690392842485, 01:02:59.216 "mibps": 19.142457184704096, 01:02:59.216 "io_failed": 0, 01:02:59.216 "io_timeout": 0, 01:02:59.216 "avg_latency_us": 25882.187785547783, 01:02:59.217 "min_latency_us": 6047.185454545454, 01:02:59.217 "max_latency_us": 17635.14181818182 01:02:59.217 } 01:02:59.217 ], 01:02:59.217 "core_count": 1 01:02:59.217 } 01:02:59.217 06:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 01:02:59.217 06:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 01:02:59.217 06:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:59.217 06:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:02:59.217 06:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 
01:02:59.217 "subsystems": [ 01:02:59.217 { 01:02:59.217 "subsystem": "keyring", 01:02:59.217 "config": [ 01:02:59.217 { 01:02:59.217 "method": "keyring_file_add_key", 01:02:59.217 "params": { 01:02:59.217 "name": "key0", 01:02:59.217 "path": "/tmp/tmp.2oyLjadhV8" 01:02:59.217 } 01:02:59.217 } 01:02:59.217 ] 01:02:59.217 }, 01:02:59.217 { 01:02:59.217 "subsystem": "iobuf", 01:02:59.217 "config": [ 01:02:59.217 { 01:02:59.217 "method": "iobuf_set_options", 01:02:59.217 "params": { 01:02:59.217 "enable_numa": false, 01:02:59.217 "large_bufsize": 135168, 01:02:59.217 "large_pool_count": 1024, 01:02:59.217 "small_bufsize": 8192, 01:02:59.217 "small_pool_count": 8192 01:02:59.217 } 01:02:59.217 } 01:02:59.217 ] 01:02:59.217 }, 01:02:59.217 { 01:02:59.217 "subsystem": "sock", 01:02:59.217 "config": [ 01:02:59.217 { 01:02:59.217 "method": "sock_set_default_impl", 01:02:59.217 "params": { 01:02:59.217 "impl_name": "posix" 01:02:59.217 } 01:02:59.217 }, 01:02:59.217 { 01:02:59.217 "method": "sock_impl_set_options", 01:02:59.217 "params": { 01:02:59.217 "enable_ktls": false, 01:02:59.217 "enable_placement_id": 0, 01:02:59.217 "enable_quickack": false, 01:02:59.217 "enable_recv_pipe": true, 01:02:59.217 "enable_zerocopy_send_client": false, 01:02:59.217 "enable_zerocopy_send_server": true, 01:02:59.217 "impl_name": "ssl", 01:02:59.217 "recv_buf_size": 4096, 01:02:59.217 "send_buf_size": 4096, 01:02:59.217 "tls_version": 0, 01:02:59.217 "zerocopy_threshold": 0 01:02:59.217 } 01:02:59.217 }, 01:02:59.217 { 01:02:59.217 "method": "sock_impl_set_options", 01:02:59.217 "params": { 01:02:59.217 "enable_ktls": false, 01:02:59.217 "enable_placement_id": 0, 01:02:59.217 "enable_quickack": false, 01:02:59.217 "enable_recv_pipe": true, 01:02:59.217 "enable_zerocopy_send_client": false, 01:02:59.217 "enable_zerocopy_send_server": true, 01:02:59.217 "impl_name": "posix", 01:02:59.217 "recv_buf_size": 2097152, 01:02:59.217 "send_buf_size": 2097152, 01:02:59.217 "tls_version": 0, 01:02:59.217 "zerocopy_threshold": 0 01:02:59.217 } 01:02:59.217 } 01:02:59.217 ] 01:02:59.217 }, 01:02:59.217 { 01:02:59.217 "subsystem": "vmd", 01:02:59.217 "config": [] 01:02:59.217 }, 01:02:59.217 { 01:02:59.217 "subsystem": "accel", 01:02:59.217 "config": [ 01:02:59.217 { 01:02:59.217 "method": "accel_set_options", 01:02:59.217 "params": { 01:02:59.217 "buf_count": 2048, 01:02:59.217 "large_cache_size": 16, 01:02:59.217 "sequence_count": 2048, 01:02:59.217 "small_cache_size": 128, 01:02:59.217 "task_count": 2048 01:02:59.217 } 01:02:59.217 } 01:02:59.217 ] 01:02:59.217 }, 01:02:59.217 { 01:02:59.217 "subsystem": "bdev", 01:02:59.217 "config": [ 01:02:59.217 { 01:02:59.217 "method": "bdev_set_options", 01:02:59.217 "params": { 01:02:59.217 "bdev_auto_examine": true, 01:02:59.217 "bdev_io_cache_size": 256, 01:02:59.217 "bdev_io_pool_size": 65535, 01:02:59.217 "iobuf_large_cache_size": 16, 01:02:59.217 "iobuf_small_cache_size": 128 01:02:59.217 } 01:02:59.217 }, 01:02:59.217 { 01:02:59.217 "method": "bdev_raid_set_options", 01:02:59.217 "params": { 01:02:59.217 "process_max_bandwidth_mb_sec": 0, 01:02:59.217 "process_window_size_kb": 1024 01:02:59.217 } 01:02:59.217 }, 01:02:59.217 { 01:02:59.217 "method": "bdev_iscsi_set_options", 01:02:59.217 "params": { 01:02:59.217 "timeout_sec": 30 01:02:59.217 } 01:02:59.217 }, 01:02:59.217 { 01:02:59.217 "method": "bdev_nvme_set_options", 01:02:59.217 "params": { 01:02:59.217 "action_on_timeout": "none", 01:02:59.217 "allow_accel_sequence": false, 01:02:59.217 "arbitration_burst": 0, 01:02:59.217 
"bdev_retry_count": 3, 01:02:59.217 "ctrlr_loss_timeout_sec": 0, 01:02:59.217 "delay_cmd_submit": true, 01:02:59.217 "dhchap_dhgroups": [ 01:02:59.217 "null", 01:02:59.217 "ffdhe2048", 01:02:59.217 "ffdhe3072", 01:02:59.217 "ffdhe4096", 01:02:59.217 "ffdhe6144", 01:02:59.217 "ffdhe8192" 01:02:59.217 ], 01:02:59.217 "dhchap_digests": [ 01:02:59.217 "sha256", 01:02:59.217 "sha384", 01:02:59.217 "sha512" 01:02:59.217 ], 01:02:59.217 "disable_auto_failback": false, 01:02:59.217 "fast_io_fail_timeout_sec": 0, 01:02:59.217 "generate_uuids": false, 01:02:59.217 "high_priority_weight": 0, 01:02:59.217 "io_path_stat": false, 01:02:59.217 "io_queue_requests": 0, 01:02:59.217 "keep_alive_timeout_ms": 10000, 01:02:59.217 "low_priority_weight": 0, 01:02:59.217 "medium_priority_weight": 0, 01:02:59.217 "nvme_adminq_poll_period_us": 10000, 01:02:59.217 "nvme_error_stat": false, 01:02:59.217 "nvme_ioq_poll_period_us": 0, 01:02:59.217 "rdma_cm_event_timeout_ms": 0, 01:02:59.217 "rdma_max_cq_size": 0, 01:02:59.217 "rdma_srq_size": 0, 01:02:59.217 "reconnect_delay_sec": 0, 01:02:59.217 "timeout_admin_us": 0, 01:02:59.217 "timeout_us": 0, 01:02:59.217 "transport_ack_timeout": 0, 01:02:59.217 "transport_retry_count": 4, 01:02:59.217 "transport_tos": 0 01:02:59.217 } 01:02:59.217 }, 01:02:59.217 { 01:02:59.217 "method": "bdev_nvme_set_hotplug", 01:02:59.217 "params": { 01:02:59.217 "enable": false, 01:02:59.217 "period_us": 100000 01:02:59.217 } 01:02:59.217 }, 01:02:59.217 { 01:02:59.217 "method": "bdev_malloc_create", 01:02:59.217 "params": { 01:02:59.217 "block_size": 4096, 01:02:59.217 "dif_is_head_of_md": false, 01:02:59.217 "dif_pi_format": 0, 01:02:59.217 "dif_type": 0, 01:02:59.217 "md_size": 0, 01:02:59.217 "name": "malloc0", 01:02:59.217 "num_blocks": 8192, 01:02:59.217 "optimal_io_boundary": 0, 01:02:59.217 "physical_block_size": 4096, 01:02:59.217 "uuid": "0a99befa-e1a2-458e-815e-457e790c9f26" 01:02:59.217 } 01:02:59.217 }, 01:02:59.217 { 01:02:59.217 "method": "bdev_wait_for_examine" 01:02:59.217 } 01:02:59.217 ] 01:02:59.217 }, 01:02:59.217 { 01:02:59.217 "subsystem": "nbd", 01:02:59.217 "config": [] 01:02:59.217 }, 01:02:59.217 { 01:02:59.217 "subsystem": "scheduler", 01:02:59.217 "config": [ 01:02:59.217 { 01:02:59.217 "method": "framework_set_scheduler", 01:02:59.217 "params": { 01:02:59.217 "name": "static" 01:02:59.217 } 01:02:59.217 } 01:02:59.217 ] 01:02:59.217 }, 01:02:59.217 { 01:02:59.217 "subsystem": "nvmf", 01:02:59.217 "config": [ 01:02:59.217 { 01:02:59.217 "method": "nvmf_set_config", 01:02:59.217 "params": { 01:02:59.217 "admin_cmd_passthru": { 01:02:59.217 "identify_ctrlr": false 01:02:59.217 }, 01:02:59.217 "dhchap_dhgroups": [ 01:02:59.217 "null", 01:02:59.217 "ffdhe2048", 01:02:59.217 "ffdhe3072", 01:02:59.217 "ffdhe4096", 01:02:59.217 "ffdhe6144", 01:02:59.217 "ffdhe8192" 01:02:59.217 ], 01:02:59.217 "dhchap_digests": [ 01:02:59.217 "sha256", 01:02:59.217 "sha384", 01:02:59.217 "sha512" 01:02:59.217 ], 01:02:59.217 "discovery_filter": "match_any" 01:02:59.217 } 01:02:59.217 }, 01:02:59.217 { 01:02:59.217 "method": "nvmf_set_max_subsystems", 01:02:59.217 "params": { 01:02:59.217 "max_subsystems": 1024 01:02:59.217 } 01:02:59.217 }, 01:02:59.217 { 01:02:59.217 "method": "nvmf_set_crdt", 01:02:59.217 "params": { 01:02:59.217 "crdt1": 0, 01:02:59.217 "crdt2": 0, 01:02:59.217 "crdt3": 0 01:02:59.217 } 01:02:59.217 }, 01:02:59.217 { 01:02:59.217 "method": "nvmf_create_transport", 01:02:59.217 "params": { 01:02:59.217 "abort_timeout_sec": 1, 01:02:59.217 "ack_timeout": 0, 
01:02:59.217 "buf_cache_size": 4294967295, 01:02:59.217 "c2h_success": false, 01:02:59.217 "data_wr_pool_size": 0, 01:02:59.217 "dif_insert_or_strip": false, 01:02:59.217 "in_capsule_data_size": 4096, 01:02:59.217 "io_unit_size": 131072, 01:02:59.217 "max_aq_depth": 128, 01:02:59.217 "max_io_qpairs_per_ctrlr": 127, 01:02:59.217 "max_io_size": 131072, 01:02:59.217 "max_queue_depth": 128, 01:02:59.217 "num_shared_buffers": 511, 01:02:59.218 "sock_priority": 0, 01:02:59.218 "trtype": "TCP", 01:02:59.218 "zcopy": false 01:02:59.218 } 01:02:59.218 }, 01:02:59.218 { 01:02:59.218 "method": "nvmf_create_subsystem", 01:02:59.218 "params": { 01:02:59.218 "allow_any_host": false, 01:02:59.218 "ana_reporting": false, 01:02:59.218 "max_cntlid": 65519, 01:02:59.218 "max_namespaces": 32, 01:02:59.218 "min_cntlid": 1, 01:02:59.218 "model_number": "SPDK bdev Controller", 01:02:59.218 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:02:59.218 "serial_number": "00000000000000000000" 01:02:59.218 } 01:02:59.218 }, 01:02:59.218 { 01:02:59.218 "method": "nvmf_subsystem_add_host", 01:02:59.218 "params": { 01:02:59.218 "host": "nqn.2016-06.io.spdk:host1", 01:02:59.218 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:02:59.218 "psk": "key0" 01:02:59.218 } 01:02:59.218 }, 01:02:59.218 { 01:02:59.218 "method": "nvmf_subsystem_add_ns", 01:02:59.218 "params": { 01:02:59.218 "namespace": { 01:02:59.218 "bdev_name": "malloc0", 01:02:59.218 "nguid": "0A99BEFAE1A2458E815E457E790C9F26", 01:02:59.218 "no_auto_visible": false, 01:02:59.218 "nsid": 1, 01:02:59.218 "uuid": "0a99befa-e1a2-458e-815e-457e790c9f26" 01:02:59.218 }, 01:02:59.218 "nqn": "nqn.2016-06.io.spdk:cnode1" 01:02:59.218 } 01:02:59.218 }, 01:02:59.218 { 01:02:59.218 "method": "nvmf_subsystem_add_listener", 01:02:59.218 "params": { 01:02:59.218 "listen_address": { 01:02:59.218 "adrfam": "IPv4", 01:02:59.218 "traddr": "10.0.0.3", 01:02:59.218 "trsvcid": "4420", 01:02:59.218 "trtype": "TCP" 01:02:59.218 }, 01:02:59.218 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:02:59.218 "secure_channel": false, 01:02:59.218 "sock_impl": "ssl" 01:02:59.218 } 01:02:59.218 } 01:02:59.218 ] 01:02:59.218 } 01:02:59.218 ] 01:02:59.218 }' 01:02:59.218 06:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 01:02:59.477 06:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 01:02:59.477 "subsystems": [ 01:02:59.477 { 01:02:59.477 "subsystem": "keyring", 01:02:59.477 "config": [ 01:02:59.477 { 01:02:59.477 "method": "keyring_file_add_key", 01:02:59.477 "params": { 01:02:59.477 "name": "key0", 01:02:59.477 "path": "/tmp/tmp.2oyLjadhV8" 01:02:59.477 } 01:02:59.477 } 01:02:59.477 ] 01:02:59.477 }, 01:02:59.477 { 01:02:59.477 "subsystem": "iobuf", 01:02:59.477 "config": [ 01:02:59.477 { 01:02:59.477 "method": "iobuf_set_options", 01:02:59.477 "params": { 01:02:59.477 "enable_numa": false, 01:02:59.478 "large_bufsize": 135168, 01:02:59.478 "large_pool_count": 1024, 01:02:59.478 "small_bufsize": 8192, 01:02:59.478 "small_pool_count": 8192 01:02:59.478 } 01:02:59.478 } 01:02:59.478 ] 01:02:59.478 }, 01:02:59.478 { 01:02:59.478 "subsystem": "sock", 01:02:59.478 "config": [ 01:02:59.478 { 01:02:59.478 "method": "sock_set_default_impl", 01:02:59.478 "params": { 01:02:59.478 "impl_name": "posix" 01:02:59.478 } 01:02:59.478 }, 01:02:59.478 { 01:02:59.478 "method": "sock_impl_set_options", 01:02:59.478 "params": { 01:02:59.478 "enable_ktls": false, 01:02:59.478 "enable_placement_id": 0, 
01:02:59.478 "enable_quickack": false, 01:02:59.478 "enable_recv_pipe": true, 01:02:59.478 "enable_zerocopy_send_client": false, 01:02:59.478 "enable_zerocopy_send_server": true, 01:02:59.478 "impl_name": "ssl", 01:02:59.478 "recv_buf_size": 4096, 01:02:59.478 "send_buf_size": 4096, 01:02:59.478 "tls_version": 0, 01:02:59.478 "zerocopy_threshold": 0 01:02:59.478 } 01:02:59.478 }, 01:02:59.478 { 01:02:59.478 "method": "sock_impl_set_options", 01:02:59.478 "params": { 01:02:59.478 "enable_ktls": false, 01:02:59.478 "enable_placement_id": 0, 01:02:59.478 "enable_quickack": false, 01:02:59.478 "enable_recv_pipe": true, 01:02:59.478 "enable_zerocopy_send_client": false, 01:02:59.478 "enable_zerocopy_send_server": true, 01:02:59.478 "impl_name": "posix", 01:02:59.478 "recv_buf_size": 2097152, 01:02:59.478 "send_buf_size": 2097152, 01:02:59.478 "tls_version": 0, 01:02:59.478 "zerocopy_threshold": 0 01:02:59.478 } 01:02:59.478 } 01:02:59.478 ] 01:02:59.478 }, 01:02:59.478 { 01:02:59.478 "subsystem": "vmd", 01:02:59.478 "config": [] 01:02:59.478 }, 01:02:59.478 { 01:02:59.478 "subsystem": "accel", 01:02:59.478 "config": [ 01:02:59.478 { 01:02:59.478 "method": "accel_set_options", 01:02:59.478 "params": { 01:02:59.478 "buf_count": 2048, 01:02:59.478 "large_cache_size": 16, 01:02:59.478 "sequence_count": 2048, 01:02:59.478 "small_cache_size": 128, 01:02:59.478 "task_count": 2048 01:02:59.478 } 01:02:59.478 } 01:02:59.478 ] 01:02:59.478 }, 01:02:59.478 { 01:02:59.478 "subsystem": "bdev", 01:02:59.478 "config": [ 01:02:59.478 { 01:02:59.478 "method": "bdev_set_options", 01:02:59.478 "params": { 01:02:59.478 "bdev_auto_examine": true, 01:02:59.478 "bdev_io_cache_size": 256, 01:02:59.478 "bdev_io_pool_size": 65535, 01:02:59.478 "iobuf_large_cache_size": 16, 01:02:59.478 "iobuf_small_cache_size": 128 01:02:59.478 } 01:02:59.478 }, 01:02:59.478 { 01:02:59.478 "method": "bdev_raid_set_options", 01:02:59.478 "params": { 01:02:59.478 "process_max_bandwidth_mb_sec": 0, 01:02:59.478 "process_window_size_kb": 1024 01:02:59.478 } 01:02:59.478 }, 01:02:59.478 { 01:02:59.478 "method": "bdev_iscsi_set_options", 01:02:59.478 "params": { 01:02:59.478 "timeout_sec": 30 01:02:59.478 } 01:02:59.478 }, 01:02:59.478 { 01:02:59.478 "method": "bdev_nvme_set_options", 01:02:59.478 "params": { 01:02:59.478 "action_on_timeout": "none", 01:02:59.478 "allow_accel_sequence": false, 01:02:59.478 "arbitration_burst": 0, 01:02:59.478 "bdev_retry_count": 3, 01:02:59.478 "ctrlr_loss_timeout_sec": 0, 01:02:59.478 "delay_cmd_submit": true, 01:02:59.478 "dhchap_dhgroups": [ 01:02:59.478 "null", 01:02:59.478 "ffdhe2048", 01:02:59.478 "ffdhe3072", 01:02:59.478 "ffdhe4096", 01:02:59.478 "ffdhe6144", 01:02:59.478 "ffdhe8192" 01:02:59.478 ], 01:02:59.478 "dhchap_digests": [ 01:02:59.478 "sha256", 01:02:59.478 "sha384", 01:02:59.478 "sha512" 01:02:59.478 ], 01:02:59.478 "disable_auto_failback": false, 01:02:59.478 "fast_io_fail_timeout_sec": 0, 01:02:59.478 "generate_uuids": false, 01:02:59.478 "high_priority_weight": 0, 01:02:59.478 "io_path_stat": false, 01:02:59.478 "io_queue_requests": 512, 01:02:59.478 "keep_alive_timeout_ms": 10000, 01:02:59.478 "low_priority_weight": 0, 01:02:59.478 "medium_priority_weight": 0, 01:02:59.478 "nvme_adminq_poll_period_us": 10000, 01:02:59.478 "nvme_error_stat": false, 01:02:59.478 "nvme_ioq_poll_period_us": 0, 01:02:59.478 "rdma_cm_event_timeout_ms": 0, 01:02:59.478 "rdma_max_cq_size": 0, 01:02:59.478 "rdma_srq_size": 0, 01:02:59.478 "reconnect_delay_sec": 0, 01:02:59.478 "timeout_admin_us": 0, 01:02:59.478 
"timeout_us": 0, 01:02:59.478 "transport_ack_timeout": 0, 01:02:59.478 "transport_retry_count": 4, 01:02:59.478 "transport_tos": 0 01:02:59.478 } 01:02:59.478 }, 01:02:59.478 { 01:02:59.478 "method": "bdev_nvme_attach_controller", 01:02:59.478 "params": { 01:02:59.478 "adrfam": "IPv4", 01:02:59.478 "ctrlr_loss_timeout_sec": 0, 01:02:59.478 "ddgst": false, 01:02:59.478 "fast_io_fail_timeout_sec": 0, 01:02:59.478 "hdgst": false, 01:02:59.478 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:02:59.478 "multipath": "multipath", 01:02:59.478 "name": "nvme0", 01:02:59.478 "prchk_guard": false, 01:02:59.478 "prchk_reftag": false, 01:02:59.478 "psk": "key0", 01:02:59.478 "reconnect_delay_sec": 0, 01:02:59.478 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:02:59.478 "traddr": "10.0.0.3", 01:02:59.478 "trsvcid": "4420", 01:02:59.478 "trtype": "TCP" 01:02:59.478 } 01:02:59.478 }, 01:02:59.478 { 01:02:59.478 "method": "bdev_nvme_set_hotplug", 01:02:59.478 "params": { 01:02:59.478 "enable": false, 01:02:59.478 "period_us": 100000 01:02:59.478 } 01:02:59.478 }, 01:02:59.478 { 01:02:59.478 "method": "bdev_enable_histogram", 01:02:59.478 "params": { 01:02:59.478 "enable": true, 01:02:59.478 "name": "nvme0n1" 01:02:59.478 } 01:02:59.478 }, 01:02:59.478 { 01:02:59.478 "method": "bdev_wait_for_examine" 01:02:59.478 } 01:02:59.478 ] 01:02:59.478 }, 01:02:59.478 { 01:02:59.478 "subsystem": "nbd", 01:02:59.478 "config": [] 01:02:59.478 } 01:02:59.478 ] 01:02:59.478 }' 01:02:59.478 06:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 83622 01:02:59.478 06:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83622 ']' 01:02:59.478 06:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83622 01:02:59.478 06:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:02:59.478 06:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:02:59.478 06:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83622 01:02:59.478 06:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:02:59.478 06:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:02:59.478 killing process with pid 83622 01:02:59.478 06:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83622' 01:02:59.478 Received shutdown signal, test time was about 1.000000 seconds 01:02:59.478 01:02:59.478 Latency(us) 01:02:59.478 [2024-12-09T06:01:54.064Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:02:59.478 [2024-12-09T06:01:54.064Z] =================================================================================================================== 01:02:59.478 [2024-12-09T06:01:54.064Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:02:59.478 06:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83622 01:02:59.478 06:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83622 01:02:59.478 06:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 83586 01:02:59.478 06:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83586 ']' 01:02:59.478 06:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83586 
01:02:59.479 06:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:02:59.479 06:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:02:59.479 06:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83586 01:02:59.738 06:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:02:59.738 06:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:02:59.738 killing process with pid 83586 01:02:59.738 06:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83586' 01:02:59.738 06:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83586 01:02:59.738 06:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83586 01:02:59.738 06:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 01:02:59.738 "subsystems": [ 01:02:59.738 { 01:02:59.738 "subsystem": "keyring", 01:02:59.738 "config": [ 01:02:59.738 { 01:02:59.738 "method": "keyring_file_add_key", 01:02:59.738 "params": { 01:02:59.738 "name": "key0", 01:02:59.738 "path": "/tmp/tmp.2oyLjadhV8" 01:02:59.738 } 01:02:59.738 } 01:02:59.738 ] 01:02:59.738 }, 01:02:59.738 { 01:02:59.738 "subsystem": "iobuf", 01:02:59.738 "config": [ 01:02:59.738 { 01:02:59.738 "method": "iobuf_set_options", 01:02:59.738 "params": { 01:02:59.738 "enable_numa": false, 01:02:59.738 "large_bufsize": 135168, 01:02:59.738 "large_pool_count": 1024, 01:02:59.738 "small_bufsize": 8192, 01:02:59.738 "small_pool_count": 8192 01:02:59.738 } 01:02:59.738 } 01:02:59.738 ] 01:02:59.738 }, 01:02:59.738 { 01:02:59.738 "subsystem": "sock", 01:02:59.738 "config": [ 01:02:59.738 { 01:02:59.738 "method": "sock_set_default_impl", 01:02:59.738 "params": { 01:02:59.738 "impl_name": "posix" 01:02:59.738 } 01:02:59.738 }, 01:02:59.738 { 01:02:59.738 "method": "sock_impl_set_options", 01:02:59.738 "params": { 01:02:59.738 "enable_ktls": false, 01:02:59.738 "enable_placement_id": 0, 01:02:59.738 "enable_quickack": false, 01:02:59.738 "enable_recv_pipe": true, 01:02:59.738 "enable_zerocopy_send_client": false, 01:02:59.738 "enable_zerocopy_send_server": true, 01:02:59.738 "impl_name": "ssl", 01:02:59.738 "recv_buf_size": 4096, 01:02:59.738 "send_buf_size": 4096, 01:02:59.738 "tls_version": 0, 01:02:59.738 "zerocopy_threshold": 0 01:02:59.738 } 01:02:59.738 }, 01:02:59.738 { 01:02:59.738 "method": "sock_impl_set_options", 01:02:59.738 "params": { 01:02:59.738 "enable_ktls": false, 01:02:59.738 "enable_placement_id": 0, 01:02:59.738 "enable_quickack": false, 01:02:59.738 "enable_recv_pipe": true, 01:02:59.738 "enable_zerocopy_send_client": false, 01:02:59.738 "enable_zerocopy_send_server": true, 01:02:59.738 "impl_name": "posix", 01:02:59.738 "recv_buf_size": 2097152, 01:02:59.738 "send_buf_size": 2097152, 01:02:59.738 "tls_version": 0, 01:02:59.738 "zerocopy_threshold": 0 01:02:59.738 } 01:02:59.738 } 01:02:59.738 ] 01:02:59.738 }, 01:02:59.738 { 01:02:59.738 "subsystem": "vmd", 01:02:59.738 "config": [] 01:02:59.738 }, 01:02:59.738 { 01:02:59.738 "subsystem": "accel", 01:02:59.738 "config": [ 01:02:59.738 { 01:02:59.738 "method": "accel_set_options", 01:02:59.738 "params": { 01:02:59.738 "buf_count": 2048, 01:02:59.738 "large_cache_size": 16, 01:02:59.738 "sequence_count": 2048, 01:02:59.738 "small_cache_size": 128, 
01:02:59.738 "task_count": 2048 01:02:59.738 } 01:02:59.738 } 01:02:59.738 ] 01:02:59.738 }, 01:02:59.738 { 01:02:59.738 "subsystem": "bdev", 01:02:59.738 "config": [ 01:02:59.738 { 01:02:59.738 "method": "bdev_set_options", 01:02:59.738 "params": { 01:02:59.738 "bdev_auto_examine": true, 01:02:59.738 "bdev_io_cache_size": 256, 01:02:59.738 "bdev_io_pool_size": 65535, 01:02:59.738 "iobuf_large_cache_size": 16, 01:02:59.738 "iobuf_small_cache_size": 128 01:02:59.738 } 01:02:59.738 }, 01:02:59.738 { 01:02:59.738 "method": "bdev_raid_set_options", 01:02:59.738 "params": { 01:02:59.738 "process_max_bandwidth_mb_sec": 0, 01:02:59.738 "process_window_size_kb": 1024 01:02:59.738 } 01:02:59.738 }, 01:02:59.738 { 01:02:59.738 "method": "bdev_iscsi_set_options", 01:02:59.738 "params": { 01:02:59.738 "timeout_sec": 30 01:02:59.738 } 01:02:59.738 }, 01:02:59.738 { 01:02:59.738 "method": "bdev_nvme_set_options", 01:02:59.738 "params": { 01:02:59.738 "action_on_timeout": "none", 01:02:59.738 "allow_accel_sequence": false, 01:02:59.738 "arbitration_burst": 0, 01:02:59.738 "bdev_retry_count": 3, 01:02:59.738 "ctrlr_loss_timeout_sec": 0, 01:02:59.738 "delay_cmd_submit": true, 01:02:59.738 "dhchap_dhgroups": [ 01:02:59.738 "null", 01:02:59.738 "ffdhe2048", 01:02:59.738 "ffdhe3072", 01:02:59.738 "ffdhe4096", 01:02:59.738 "ffdhe6144", 01:02:59.738 "ffdhe8192" 01:02:59.738 ], 01:02:59.738 "dhchap_digests": [ 01:02:59.738 "sha256", 01:02:59.738 "sha384", 01:02:59.738 "sha512" 01:02:59.738 ], 01:02:59.738 "disable_auto_failback": false, 01:02:59.738 "fast_io_fail_timeout_sec": 0, 01:02:59.738 "generate_uuids": false, 01:02:59.738 "high_priority_weight": 0, 01:02:59.738 "io_path_stat": false, 01:02:59.738 "io_queue_requests": 0, 01:02:59.738 "keep_alive_timeout_ms": 10000, 01:02:59.738 "low_priority_weight": 0, 01:02:59.738 "medium_priority_weight": 0, 01:02:59.738 "nvme_adminq_poll_period_us": 10000, 01:02:59.738 "nvme_error_stat": false, 01:02:59.738 "nvme_ioq_poll_period_us": 0, 01:02:59.738 "rdma_cm_event_timeout_ms": 0, 01:02:59.738 "rdma_max_cq_size": 0, 01:02:59.738 "rdma_srq_size": 0, 01:02:59.738 "reconnect_delay_sec": 0, 01:02:59.738 "timeout_admin_us": 0, 01:02:59.738 "timeout_us": 0, 01:02:59.738 "transport_ack_timeout": 0, 01:02:59.738 "transport_retry_count": 4, 01:02:59.738 "transport_tos": 0 01:02:59.738 } 01:02:59.738 }, 01:02:59.738 { 01:02:59.738 "method": "bdev_nvme_set_hotplug", 01:02:59.738 "params": { 01:02:59.738 "enable": false, 01:02:59.738 "period_us": 100000 01:02:59.738 } 01:02:59.738 }, 01:02:59.738 { 01:02:59.738 "method": "bdev_malloc_create", 01:02:59.738 "params": { 01:02:59.738 "block_size": 4096, 01:02:59.738 "dif_is_head_of_md": false, 01:02:59.738 "dif_pi_format": 0, 01:02:59.738 "dif_type": 0, 01:02:59.738 "md_size": 0, 01:02:59.738 "name": "malloc0", 01:02:59.738 "num_blocks": 8192, 01:02:59.738 "optimal_io_boundary": 0, 01:02:59.738 "physical_block_size": 4096, 01:02:59.738 "uuid": "0a99befa-e1a2-458e-815e-457e790c9f26" 01:02:59.738 } 01:02:59.738 }, 01:02:59.738 { 01:02:59.738 "method": "bdev_wait_for_examine" 01:02:59.738 } 01:02:59.738 ] 01:02:59.738 }, 01:02:59.738 { 01:02:59.738 "subsystem": "nbd", 01:02:59.738 "config": [] 01:02:59.738 }, 01:02:59.738 { 01:02:59.738 "subsystem": "scheduler", 01:02:59.738 "config": [ 01:02:59.738 { 01:02:59.738 "method": "framework_set_scheduler", 01:02:59.738 "params": { 01:02:59.738 "name": "static" 01:02:59.738 } 01:02:59.738 } 01:02:59.738 ] 01:02:59.739 }, 01:02:59.739 { 01:02:59.739 "subsystem": "nvmf", 01:02:59.739 "config": [ 
01:02:59.739 { 01:02:59.739 "method": "nvmf_set_config", 01:02:59.739 "params": { 01:02:59.739 "admin_cmd_passthru": { 01:02:59.739 "identify_ctrlr": false 01:02:59.739 }, 01:02:59.739 "dhchap_dhgroups": [ 01:02:59.739 "null", 01:02:59.739 "ffdhe2048", 01:02:59.739 "ffdhe3072", 01:02:59.739 "ffdhe4096", 01:02:59.739 "ffdhe6144", 01:02:59.739 "ffdhe8192" 01:02:59.739 ], 01:02:59.739 "dhchap_digests": [ 01:02:59.739 "sha256", 01:02:59.739 "sha384", 01:02:59.739 "sha512" 01:02:59.739 ], 01:02:59.739 "discovery_filter": "match_any" 01:02:59.739 } 01:02:59.739 }, 01:02:59.739 { 01:02:59.739 "method": "nvmf_set_max_subsystems", 01:02:59.739 "params": { 01:02:59.739 "max_subsystems": 1024 01:02:59.739 } 01:02:59.739 }, 01:02:59.739 { 01:02:59.739 "method": "nvmf_set_crdt", 01:02:59.739 "params": { 01:02:59.739 "crdt1": 0, 01:02:59.739 "crdt2": 0, 01:02:59.739 "crdt3": 0 01:02:59.739 } 01:02:59.739 }, 01:02:59.739 { 01:02:59.739 "method": "nvmf_create_transport", 01:02:59.739 "params": { 01:02:59.739 "abort_timeout_sec": 1, 01:02:59.739 "ack_timeout": 0, 01:02:59.739 "buf_cache_size": 4294967295, 01:02:59.739 "c2h_success": false, 01:02:59.739 "data_wr_pool_size": 0, 01:02:59.739 "dif_insert_or_strip": false, 01:02:59.739 "in_capsule_data_size": 4096, 01:02:59.739 "io_unit_size": 131072, 01:02:59.739 "max_aq_depth": 128, 01:02:59.739 "max_io_qpairs_per_ctrlr": 127, 01:02:59.739 "max_io_size": 131072, 01:02:59.739 "max_queue_depth": 128, 01:02:59.739 "num_shared_buffers": 511, 01:02:59.739 "sock_priority": 0, 01:02:59.739 "trtype": "TCP", 01:02:59.739 "zcopy": false 01:02:59.739 } 01:02:59.739 }, 01:02:59.739 { 01:02:59.739 "method": "nvmf_create_subsystem", 01:02:59.739 "params": { 01:02:59.739 "allow_any_host": false, 01:02:59.739 "ana_reporting": false, 01:02:59.739 "max_cntlid": 65519, 01:02:59.739 "max_namespaces": 32, 01:02:59.739 "min_cntlid": 1, 01:02:59.739 "model_number": "SPDK bdev Controller", 01:02:59.739 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:02:59.739 "serial_number": "00000000000000000000" 01:02:59.739 } 01:02:59.739 }, 01:02:59.739 { 01:02:59.739 "method": "nvmf_subsystem_add_host", 01:02:59.739 "params": { 01:02:59.739 "host": "nqn.2016-06.io.spdk:host1", 01:02:59.739 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:02:59.739 "psk": "key0" 01:02:59.739 } 01:02:59.739 }, 01:02:59.739 { 01:02:59.739 "method": "nvmf_subsystem_add_ns", 01:02:59.739 "params": { 01:02:59.739 "namespace": { 01:02:59.739 "bdev_name": "malloc0", 01:02:59.739 "nguid": "0A99BEFAE1A2458E815E457E790C9F26", 01:02:59.739 "no_auto_visible": false, 01:02:59.739 "nsid": 1, 01:02:59.739 "uuid": "0a99befa-e1a2-458e-815e-457e790c9f26" 01:02:59.739 }, 01:02:59.739 "nqn": "nqn.2016-06.io.spdk:cnode1" 01:02:59.739 } 01:02:59.739 }, 01:02:59.739 { 01:02:59.739 "method": "nvmf_subsystem_add_listener", 01:02:59.739 "params": { 01:02:59.739 "listen_address": { 01:02:59.739 "adrfam": "IPv4", 01:02:59.739 "traddr": "10.0.0.3", 01:02:59.739 "trsvcid": "4420", 01:02:59.739 "trtype": "TCP" 01:02:59.739 }, 01:02:59.739 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:02:59.739 "secure_channel": false, 01:02:59.739 "sock_impl": "ssl" 01:02:59.739 } 01:02:59.739 } 01:02:59.739 ] 01:02:59.739 } 01:02:59.739 ] 01:02:59.739 }' 01:02:59.739 06:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 01:02:59.739 06:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:02:59.739 06:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # 
xtrace_disable 01:02:59.739 06:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:59.739 06:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83713 01:02:59.739 06:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83713 01:02:59.739 06:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 01:02:59.739 06:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83713 ']' 01:02:59.739 06:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:02:59.739 06:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:02:59.739 06:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:02:59.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:02:59.739 06:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:02:59.739 06:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:02:59.739 [2024-12-09 06:01:54.262736] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:02:59.739 [2024-12-09 06:01:54.262821] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:03:00.007 [2024-12-09 06:01:54.402414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:00.007 [2024-12-09 06:01:54.428663] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:03:00.007 [2024-12-09 06:01:54.428741] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:03:00.007 [2024-12-09 06:01:54.428766] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:03:00.007 [2024-12-09 06:01:54.428773] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:03:00.007 [2024-12-09 06:01:54.428780] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
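The target above is launched with -c /dev/fd/62: the JSON blob echoed by tls.sh@273 is handed to nvmf_tgt through bash process substitution rather than a config file on disk. A minimal sketch of that invocation pattern, with the configuration body abbreviated and the netns/binary paths taken from the command line in the log:

    config='{ "subsystems": [ { "subsystem": "nbd", "config": [] } ] }'   # abbreviated placeholder
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF \
        -c <(echo "$config") &        # the <(...) substitution shows up inside the target as /dev/fd/NN
    nvmfpid=$!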
01:03:00.007 [2024-12-09 06:01:54.429146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:03:00.287 [2024-12-09 06:01:54.615550] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:03:00.287 [2024-12-09 06:01:54.647510] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:03:00.287 [2024-12-09 06:01:54.647709] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:03:00.868 06:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:00.868 06:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:03:00.868 06:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:03:00.868 06:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 01:03:00.868 06:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:03:00.868 06:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:03:00.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:03:00.868 06:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=83757 01:03:00.868 06:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 83757 /var/tmp/bdevperf.sock 01:03:00.868 06:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83757 ']' 01:03:00.868 06:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:03:00.868 06:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 01:03:00.868 06:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
01:03:00.868 06:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 01:03:00.868 06:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 01:03:00.868 06:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:03:00.868 06:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 01:03:00.868 "subsystems": [ 01:03:00.868 { 01:03:00.868 "subsystem": "keyring", 01:03:00.868 "config": [ 01:03:00.868 { 01:03:00.868 "method": "keyring_file_add_key", 01:03:00.868 "params": { 01:03:00.868 "name": "key0", 01:03:00.868 "path": "/tmp/tmp.2oyLjadhV8" 01:03:00.868 } 01:03:00.868 } 01:03:00.868 ] 01:03:00.868 }, 01:03:00.868 { 01:03:00.868 "subsystem": "iobuf", 01:03:00.868 "config": [ 01:03:00.868 { 01:03:00.868 "method": "iobuf_set_options", 01:03:00.868 "params": { 01:03:00.868 "enable_numa": false, 01:03:00.868 "large_bufsize": 135168, 01:03:00.868 "large_pool_count": 1024, 01:03:00.868 "small_bufsize": 8192, 01:03:00.868 "small_pool_count": 8192 01:03:00.868 } 01:03:00.868 } 01:03:00.868 ] 01:03:00.868 }, 01:03:00.868 { 01:03:00.868 "subsystem": "sock", 01:03:00.868 "config": [ 01:03:00.868 { 01:03:00.868 "method": "sock_set_default_impl", 01:03:00.868 "params": { 01:03:00.868 "impl_name": "posix" 01:03:00.868 } 01:03:00.868 }, 01:03:00.868 { 01:03:00.868 "method": "sock_impl_set_options", 01:03:00.868 "params": { 01:03:00.868 "enable_ktls": false, 01:03:00.868 "enable_placement_id": 0, 01:03:00.868 "enable_quickack": false, 01:03:00.868 "enable_recv_pipe": true, 01:03:00.868 "enable_zerocopy_send_client": false, 01:03:00.868 "enable_zerocopy_send_server": true, 01:03:00.868 "impl_name": "ssl", 01:03:00.868 "recv_buf_size": 4096, 01:03:00.868 "send_buf_size": 4096, 01:03:00.868 "tls_version": 0, 01:03:00.868 "zerocopy_threshold": 0 01:03:00.868 } 01:03:00.868 }, 01:03:00.868 { 01:03:00.868 "method": "sock_impl_set_options", 01:03:00.868 "params": { 01:03:00.868 "enable_ktls": false, 01:03:00.868 "enable_placement_id": 0, 01:03:00.868 "enable_quickack": false, 01:03:00.868 "enable_recv_pipe": true, 01:03:00.868 "enable_zerocopy_send_client": false, 01:03:00.868 "enable_zerocopy_send_server": true, 01:03:00.868 "impl_name": "posix", 01:03:00.868 "recv_buf_size": 2097152, 01:03:00.868 "send_buf_size": 2097152, 01:03:00.868 "tls_version": 0, 01:03:00.868 "zerocopy_threshold": 0 01:03:00.868 } 01:03:00.868 } 01:03:00.868 ] 01:03:00.868 }, 01:03:00.868 { 01:03:00.868 "subsystem": "vmd", 01:03:00.868 "config": [] 01:03:00.868 }, 01:03:00.868 { 01:03:00.868 "subsystem": "accel", 01:03:00.868 "config": [ 01:03:00.868 { 01:03:00.868 "method": "accel_set_options", 01:03:00.868 "params": { 01:03:00.868 "buf_count": 2048, 01:03:00.868 "large_cache_size": 16, 01:03:00.868 "sequence_count": 2048, 01:03:00.868 "small_cache_size": 128, 01:03:00.868 "task_count": 2048 01:03:00.868 } 01:03:00.868 } 01:03:00.868 ] 01:03:00.868 }, 01:03:00.868 { 01:03:00.868 "subsystem": "bdev", 01:03:00.868 "config": [ 01:03:00.868 { 01:03:00.868 "method": "bdev_set_options", 01:03:00.868 "params": { 01:03:00.868 "bdev_auto_examine": true, 01:03:00.868 "bdev_io_cache_size": 256, 01:03:00.868 "bdev_io_pool_size": 65535, 01:03:00.868 "iobuf_large_cache_size": 16, 01:03:00.868 "iobuf_small_cache_size": 128 01:03:00.868 } 01:03:00.868 }, 01:03:00.868 { 01:03:00.868 "method": "bdev_raid_set_options", 
01:03:00.868 "params": { 01:03:00.868 "process_max_bandwidth_mb_sec": 0, 01:03:00.868 "process_window_size_kb": 1024 01:03:00.868 } 01:03:00.868 }, 01:03:00.868 { 01:03:00.868 "method": "bdev_iscsi_set_options", 01:03:00.868 "params": { 01:03:00.868 "timeout_sec": 30 01:03:00.868 } 01:03:00.868 }, 01:03:00.868 { 01:03:00.868 "method": "bdev_nvme_set_options", 01:03:00.868 "params": { 01:03:00.868 "action_on_timeout": "none", 01:03:00.868 "allow_accel_sequence": false, 01:03:00.868 "arbitration_burst": 0, 01:03:00.868 "bdev_retry_count": 3, 01:03:00.868 "ctrlr_loss_timeout_sec": 0, 01:03:00.868 "delay_cmd_submit": true, 01:03:00.868 "dhchap_dhgroups": [ 01:03:00.868 "null", 01:03:00.868 "ffdhe2048", 01:03:00.868 "ffdhe3072", 01:03:00.868 "ffdhe4096", 01:03:00.868 "ffdhe6144", 01:03:00.868 "ffdhe8192" 01:03:00.868 ], 01:03:00.868 "dhchap_digests": [ 01:03:00.868 "sha256", 01:03:00.868 "sha384", 01:03:00.868 "sha512" 01:03:00.868 ], 01:03:00.868 "disable_auto_failback": false, 01:03:00.868 "fast_io_fail_timeout_sec": 0, 01:03:00.868 "generate_uuids": false, 01:03:00.868 "high_priority_weight": 0, 01:03:00.868 "io_path_stat": false, 01:03:00.868 "io_queue_requests": 512, 01:03:00.868 "keep_alive_timeout_ms": 10000, 01:03:00.868 "low_priority_weight": 0, 01:03:00.868 "medium_priority_weight": 0, 01:03:00.868 "nvme_adminq_poll_period_us": 10000, 01:03:00.868 "nvme_error_stat": false, 01:03:00.868 "nvme_ioq_poll_period_us": 0, 01:03:00.868 "rdma_cm_event_timeout_ms": 0, 01:03:00.868 "rdma_max_cq_size": 0, 01:03:00.868 "rdma_srq_size": 0, 01:03:00.868 "reconnect_delay_sec": 0, 01:03:00.868 "timeout_admin_us": 0, 01:03:00.868 "timeout_us": 0, 01:03:00.868 "transport_ack_timeout": 0, 01:03:00.868 "transport_retry_count": 4, 01:03:00.868 "transport_tos": 0 01:03:00.868 } 01:03:00.868 }, 01:03:00.868 { 01:03:00.868 "method": "bdev_nvme_attach_controller", 01:03:00.868 "params": { 01:03:00.868 "adrfam": "IPv4", 01:03:00.868 "ctrlr_loss_timeout_sec": 0, 01:03:00.868 "ddgst": false, 01:03:00.868 "fast_io_fail_timeout_sec": 0, 01:03:00.868 "hdgst": false, 01:03:00.868 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:03:00.868 "multipath": "multipath", 01:03:00.868 "name": "nvme0", 01:03:00.868 "prchk_guard": false, 01:03:00.868 "prchk_reftag": false, 01:03:00.868 "psk": "key0", 01:03:00.868 "reconnect_delay_sec": 0, 01:03:00.868 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:03:00.868 "traddr": "10.0.0.3", 01:03:00.868 "trsvcid": "4420", 01:03:00.868 "trtype": "TCP" 01:03:00.868 } 01:03:00.868 }, 01:03:00.868 { 01:03:00.868 "method": "bdev_nvme_set_hotplug", 01:03:00.868 "params": { 01:03:00.868 "enable": false, 01:03:00.868 "period_us": 100000 01:03:00.868 } 01:03:00.868 }, 01:03:00.868 { 01:03:00.868 "method": "bdev_enable_histogram", 01:03:00.868 "params": { 01:03:00.868 "enable": true, 01:03:00.868 "name": "nvme0n1" 01:03:00.868 } 01:03:00.868 }, 01:03:00.868 { 01:03:00.868 "method": "bdev_wait_for_examine" 01:03:00.868 } 01:03:00.868 ] 01:03:00.868 }, 01:03:00.868 { 01:03:00.868 "subsystem": "nbd", 01:03:00.868 "config": [] 01:03:00.868 } 01:03:00.868 ] 01:03:00.868 }' 01:03:00.868 [2024-12-09 06:01:55.421138] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:03:00.868 [2024-12-09 06:01:55.421776] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83757 ] 01:03:01.126 [2024-12-09 06:01:55.574309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:01.126 [2024-12-09 06:01:55.612742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:03:01.383 [2024-12-09 06:01:55.753596] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:03:01.948 06:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:01.948 06:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 01:03:01.948 06:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:03:01.948 06:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 01:03:02.207 06:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:03:02.207 06:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:03:02.207 Running I/O for 1 seconds... 01:03:03.586 4736.00 IOPS, 18.50 MiB/s 01:03:03.586 Latency(us) 01:03:03.586 [2024-12-09T06:01:58.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:03:03.586 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 01:03:03.586 Verification LBA range: start 0x0 length 0x2000 01:03:03.586 nvme0n1 : 1.02 4788.17 18.70 0.00 0.00 26486.62 6076.97 16205.27 01:03:03.586 [2024-12-09T06:01:58.172Z] =================================================================================================================== 01:03:03.586 [2024-12-09T06:01:58.172Z] Total : 4788.17 18.70 0.00 0.00 26486.62 6076.97 16205.27 01:03:03.586 { 01:03:03.586 "results": [ 01:03:03.586 { 01:03:03.586 "job": "nvme0n1", 01:03:03.586 "core_mask": "0x2", 01:03:03.586 "workload": "verify", 01:03:03.586 "status": "finished", 01:03:03.586 "verify_range": { 01:03:03.586 "start": 0, 01:03:03.586 "length": 8192 01:03:03.586 }, 01:03:03.586 "queue_depth": 128, 01:03:03.586 "io_size": 4096, 01:03:03.586 "runtime": 1.015836, 01:03:03.586 "iops": 4788.174469107218, 01:03:03.586 "mibps": 18.70380651995007, 01:03:03.586 "io_failed": 0, 01:03:03.586 "io_timeout": 0, 01:03:03.586 "avg_latency_us": 26486.623540669854, 01:03:03.586 "min_latency_us": 6076.9745454545455, 01:03:03.586 "max_latency_us": 16205.265454545455 01:03:03.586 } 01:03:03.586 ], 01:03:03.586 "core_count": 1 01:03:03.586 } 01:03:03.586 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 01:03:03.586 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 01:03:03.586 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 01:03:03.586 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 01:03:03.586 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 01:03:03.586 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 01:03:03.586 
06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 01:03:03.586 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 01:03:03.586 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 01:03:03.586 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 01:03:03.586 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 01:03:03.586 nvmf_trace.0 01:03:03.586 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 01:03:03.586 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 83757 01:03:03.586 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83757 ']' 01:03:03.586 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83757 01:03:03.586 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:03:03.586 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:03:03.586 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83757 01:03:03.586 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:03:03.586 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:03:03.586 killing process with pid 83757 01:03:03.586 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83757' 01:03:03.586 Received shutdown signal, test time was about 1.000000 seconds 01:03:03.586 01:03:03.586 Latency(us) 01:03:03.586 [2024-12-09T06:01:58.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:03:03.586 [2024-12-09T06:01:58.172Z] =================================================================================================================== 01:03:03.586 [2024-12-09T06:01:58.172Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:03:03.586 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83757 01:03:03.586 06:01:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83757 01:03:03.586 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 01:03:03.586 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 01:03:03.586 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 01:03:03.586 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:03:03.586 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 01:03:03.586 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 01:03:03.586 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:03:03.586 rmmod nvme_tcp 01:03:03.586 rmmod nvme_fabrics 01:03:03.586 rmmod nvme_keyring 01:03:03.846 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:03:03.846 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set 
-e 01:03:03.846 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 01:03:03.846 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 83713 ']' 01:03:03.846 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 83713 01:03:03.846 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83713 ']' 01:03:03.846 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83713 01:03:03.846 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 01:03:03.846 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:03:03.846 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83713 01:03:03.846 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:03:03.846 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:03:03.846 killing process with pid 83713 01:03:03.846 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83713' 01:03:03.846 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83713 01:03:03.846 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83713 01:03:03.846 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:03:03.846 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:03:03.846 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:03:03.846 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 01:03:03.846 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 01:03:03.846 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:03:03.846 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 01:03:03.846 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:03:03.846 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:03:03.846 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:03:03.846 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:03:03.846 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:03:03.846 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:03:03.846 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:03:04.106 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:03:04.106 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:03:04.106 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:03:04.106 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:03:04.106 06:01:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:03:04.106 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:03:04.106 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:04.106 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:04.106 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 01:03:04.106 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:04.106 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:03:04.106 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:04.106 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 01:03:04.106 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.9KorrCRyuo /tmp/tmp.pAF3beNQfE /tmp/tmp.2oyLjadhV8 01:03:04.106 01:03:04.106 real 1m21.041s 01:03:04.106 user 2m11.724s 01:03:04.106 sys 0m26.210s 01:03:04.106 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 01:03:04.106 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 01:03:04.106 ************************************ 01:03:04.106 END TEST nvmf_tls 01:03:04.106 ************************************ 01:03:04.106 06:01:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 01:03:04.106 06:01:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:03:04.106 06:01:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:03:04.106 06:01:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:03:04.106 ************************************ 01:03:04.106 START TEST nvmf_fips 01:03:04.106 ************************************ 01:03:04.106 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 01:03:04.366 * Looking for test storage... 
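The nvmf_fips suite above is dispatched through the same run_test wrapper used for nvmf_tls: it takes a test name plus the command to execute and prints the START TEST / END TEST banners seen in the output. A rough sketch of that wrapper, with the real helper's argument validation, xtrace handling, and timing simplified away:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        "$@"                      # e.g. .../test/nvmf/fips/fips.sh --transport=tcp
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }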
01:03:04.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:03:04.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:04.367 --rc genhtml_branch_coverage=1 01:03:04.367 --rc genhtml_function_coverage=1 01:03:04.367 --rc genhtml_legend=1 01:03:04.367 --rc geninfo_all_blocks=1 01:03:04.367 --rc geninfo_unexecuted_blocks=1 01:03:04.367 01:03:04.367 ' 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:03:04.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:04.367 --rc genhtml_branch_coverage=1 01:03:04.367 --rc genhtml_function_coverage=1 01:03:04.367 --rc genhtml_legend=1 01:03:04.367 --rc geninfo_all_blocks=1 01:03:04.367 --rc geninfo_unexecuted_blocks=1 01:03:04.367 01:03:04.367 ' 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:03:04.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:04.367 --rc genhtml_branch_coverage=1 01:03:04.367 --rc genhtml_function_coverage=1 01:03:04.367 --rc genhtml_legend=1 01:03:04.367 --rc geninfo_all_blocks=1 01:03:04.367 --rc geninfo_unexecuted_blocks=1 01:03:04.367 01:03:04.367 ' 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:03:04.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:04.367 --rc genhtml_branch_coverage=1 01:03:04.367 --rc genhtml_function_coverage=1 01:03:04.367 --rc genhtml_legend=1 01:03:04.367 --rc geninfo_all_blocks=1 01:03:04.367 --rc geninfo_unexecuted_blocks=1 01:03:04.367 01:03:04.367 ' 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:04.367 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:03:04.368 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 01:03:04.368 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 01:03:04.628 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 01:03:04.628 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 01:03:04.628 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 01:03:04.628 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 01:03:04.628 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 01:03:04.628 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 01:03:04.628 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 01:03:04.628 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 01:03:04.628 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:03:04.628 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 01:03:04.628 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:03:04.628 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 01:03:04.628 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:03:04.628 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 01:03:04.628 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 01:03:04.628 06:01:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 01:03:04.628 Error setting digest 01:03:04.628 4082D188E27F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 01:03:04.628 4082D188E27F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 01:03:04.628 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 01:03:04.628 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:03:04.628 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:03:04.628 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:03:04.628 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 01:03:04.628 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:03:04.628 
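
The fips.sh trace above verifies the FIPS environment before any NVMe/TLS work starts: OpenSSL must be >= 3.0.0, a fips.so module must exist under the directory reported by `openssl info -modulesdir`, both a base and a fips provider must appear in `openssl list -providers` once OPENSSL_CONF points at the generated spdk_fips.conf, and finally `openssl md5` is expected to fail because MD5 is not a FIPS-approved digest (hence the "Error setting digest" above being a pass, not a failure). A minimal, hedged sketch of that verification flow, illustrative only and not the actual fips.sh logic, assuming a system OpenSSL 3.x whose active default provider is the FIPS one:

# sketch: confirm the environment enforces FIPS before running the TLS test
set -e
ver=$(openssl version | awk '{print $2}')          # expect 3.x (>= 3.0.0)
moddir=$(openssl info -modulesdir)                 # e.g. /usr/lib64/ossl-modules
[[ -f "$moddir/fips.so" ]]                         # FIPS provider module must be present
openssl list -providers | grep name                # expect both "base" and "fips" providers listed
if openssl md5 /dev/null >/dev/null 2>&1; then     # MD5 must be rejected under FIPS
    echo "FIPS not enforced: MD5 still works" >&2
    exit 1
fi
echo "OpenSSL $ver enforces FIPS: MD5 rejected as expected"
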
06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:03:04.628 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 01:03:04.628 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 01:03:04.628 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 01:03:04.628 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:04.628 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:03:04.628 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:04.628 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:03:04.628 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:03:04.628 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:03:04.628 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:03:04.628 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:03:04.628 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 01:03:04.628 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:03:04.628 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:03:04.628 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:03:04.628 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:03:04.628 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:03:04.628 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:03:04.628 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:03:04.628 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:03:04.628 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:03:04.628 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:03:04.628 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:03:04.628 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:03:04.629 Cannot find device "nvmf_init_br" 01:03:04.629 06:01:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:03:04.629 Cannot find device "nvmf_init_br2" 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:03:04.629 Cannot find device "nvmf_tgt_br" 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:03:04.629 Cannot find device "nvmf_tgt_br2" 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:03:04.629 Cannot find device "nvmf_init_br" 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:03:04.629 Cannot find device "nvmf_init_br2" 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:03:04.629 Cannot find device "nvmf_tgt_br" 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:03:04.629 Cannot find device "nvmf_tgt_br2" 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:03:04.629 Cannot find device "nvmf_br" 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:03:04.629 Cannot find device "nvmf_init_if" 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:03:04.629 Cannot find device "nvmf_init_if2" 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:04.629 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:04.629 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:03:04.629 06:01:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:03:04.629 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:03:04.888 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:03:04.888 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:03:04.888 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:03:04.888 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:03:04.888 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:03:04.888 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:03:04.888 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:03:04.888 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:03:04.888 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:03:04.888 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:03:04.888 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:03:04.888 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:03:04.889 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:03:04.889 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 01:03:04.889 01:03:04.889 --- 10.0.0.3 ping statistics --- 01:03:04.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:04.889 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:03:04.889 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:03:04.889 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 01:03:04.889 01:03:04.889 --- 10.0.0.4 ping statistics --- 01:03:04.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:04.889 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:03:04.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:03:04.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 01:03:04.889 01:03:04.889 --- 10.0.0.1 ping statistics --- 01:03:04.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:04.889 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:03:04.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
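
The nvmf_veth_init trace above builds the virtual test topology: a network namespace (nvmf_tgt_ns_spdk) for the target, veth pairs for the initiator and target sides, all joined through the nvmf_br bridge, with 10.0.0.1/10.0.0.2 on the initiator ends and 10.0.0.3/10.0.0.4 inside the namespace; the leading "Cannot find device" errors are just best-effort cleanup of a previous run (each command is followed by `true`), and the final pings confirm connectivity in both directions. A condensed sketch of the same topology for one interface pair, with illustrative names and only a subset of the script's steps, requiring root:

# sketch: one initiator/target veth pair bridged into the target namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                      # bridge the two host-side ends together
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.3                                           # host -> namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # namespace -> host
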
01:03:04.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 01:03:04.889 01:03:04.889 --- 10.0.0.2 ping statistics --- 01:03:04.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:04.889 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=84088 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 84088 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 84088 ']' 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:04.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 01:03:04.889 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:03:05.148 [2024-12-09 06:01:59.513391] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
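
nvmfappstart then launches the SPDK target inside that namespace with a single-core mask (-m 0x2) and full tracepoints (-e 0xFFFF), records its pid, and waitforlisten polls the RPC UNIX socket before the test proceeds. A hedged sketch of that startup sequence (the retry loop is illustrative; the real waitforlisten helper lives in common/autotest_common.sh, and paths assume the spdk repo root):

# sketch: start nvmf_tgt in the target namespace and wait for its RPC socket
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
for _ in $(seq 1 100); do
    # rpc.py fails until the app is listening on /var/tmp/spdk.sock
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done
echo "nvmf_tgt up with pid $nvmfpid"
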
01:03:05.148 [2024-12-09 06:01:59.513485] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:03:05.148 [2024-12-09 06:01:59.666055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:05.148 [2024-12-09 06:01:59.704169] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:03:05.148 [2024-12-09 06:01:59.704226] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:03:05.148 [2024-12-09 06:01:59.704241] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:03:05.148 [2024-12-09 06:01:59.704251] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:03:05.148 [2024-12-09 06:01:59.704260] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:03:05.148 [2024-12-09 06:01:59.704617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:03:05.407 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:05.407 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 01:03:05.407 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:03:05.407 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 01:03:05.407 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:03:05.407 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:03:05.407 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 01:03:05.407 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 01:03:05.407 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 01:03:05.407 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.jBg 01:03:05.407 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 01:03:05.407 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.jBg 01:03:05.407 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.jBg 01:03:05.407 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.jBg 01:03:05.407 06:01:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:03:05.666 [2024-12-09 06:02:00.068894] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:03:05.666 [2024-12-09 06:02:00.084881] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:03:05.666 [2024-12-09 06:02:00.085108] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:03:05.666 malloc0 01:03:05.666 06:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:03:05.666 06:02:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=84129 01:03:05.666 06:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 01:03:05.666 06:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 84129 /var/tmp/bdevperf.sock 01:03:05.666 06:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 84129 ']' 01:03:05.666 06:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:03:05.666 06:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 01:03:05.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:03:05.666 06:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:03:05.666 06:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 01:03:05.666 06:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:03:05.666 [2024-12-09 06:02:00.232221] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:03:05.666 [2024-12-09 06:02:00.232313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84129 ] 01:03:05.925 [2024-12-09 06:02:00.382974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:05.925 [2024-12-09 06:02:00.421449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:03:05.925 06:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:05.925 06:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 01:03:05.925 06:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.jBg 01:03:06.493 06:02:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 01:03:06.493 [2024-12-09 06:02:01.059978] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:03:06.752 TLSTESTn1 01:03:06.752 06:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:03:06.752 Running I/O for 10 seconds... 
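
With the target already listening on 10.0.0.3:4420 with TLS enabled (and holding the same PSK via setup_nvmf_tgt_conf), the test writes the NVMe/TCP pre-shared key to a 0600 temp file, registers it with bdevperf's keyring over its private RPC socket, attaches a TLS-protected controller, and drives ten seconds of verify I/O. A condensed sketch of those RPC steps, assuming bdevperf is already running with -r /var/tmp/bdevperf.sock and paths are relative to the spdk repo root (the key is the test's sample key from the log, not a secret):

# sketch: register a TLS PSK and attach an NVMe/TCP controller through bdevperf's RPC socket
key_path=$(mktemp -t spdk-psk.XXX)
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
chmod 0600 "$key_path"                                   # keyring_file_add_key requires restrictive permissions
./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
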
01:03:09.062 4743.00 IOPS, 18.53 MiB/s [2024-12-09T06:02:04.585Z] 4812.50 IOPS, 18.80 MiB/s [2024-12-09T06:02:05.521Z] 4820.33 IOPS, 18.83 MiB/s [2024-12-09T06:02:06.456Z] 4825.25 IOPS, 18.85 MiB/s [2024-12-09T06:02:07.393Z] 4828.20 IOPS, 18.86 MiB/s [2024-12-09T06:02:08.331Z] 4837.83 IOPS, 18.90 MiB/s [2024-12-09T06:02:09.711Z] 4841.00 IOPS, 18.91 MiB/s [2024-12-09T06:02:10.648Z] 4842.50 IOPS, 18.92 MiB/s [2024-12-09T06:02:11.587Z] 4837.44 IOPS, 18.90 MiB/s [2024-12-09T06:02:11.587Z] 4845.60 IOPS, 18.93 MiB/s 01:03:17.001 Latency(us) 01:03:17.001 [2024-12-09T06:02:11.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:03:17.001 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:03:17.001 Verification LBA range: start 0x0 length 0x2000 01:03:17.001 TLSTESTn1 : 10.01 4851.50 18.95 0.00 0.00 26338.68 4408.79 34793.66 01:03:17.001 [2024-12-09T06:02:11.587Z] =================================================================================================================== 01:03:17.001 [2024-12-09T06:02:11.587Z] Total : 4851.50 18.95 0.00 0.00 26338.68 4408.79 34793.66 01:03:17.001 { 01:03:17.001 "results": [ 01:03:17.001 { 01:03:17.001 "job": "TLSTESTn1", 01:03:17.001 "core_mask": "0x4", 01:03:17.001 "workload": "verify", 01:03:17.001 "status": "finished", 01:03:17.001 "verify_range": { 01:03:17.001 "start": 0, 01:03:17.001 "length": 8192 01:03:17.001 }, 01:03:17.001 "queue_depth": 128, 01:03:17.001 "io_size": 4096, 01:03:17.001 "runtime": 10.014024, 01:03:17.001 "iops": 4851.496261642672, 01:03:17.001 "mibps": 18.951157272041687, 01:03:17.001 "io_failed": 0, 01:03:17.001 "io_timeout": 0, 01:03:17.001 "avg_latency_us": 26338.676545667866, 01:03:17.001 "min_latency_us": 4408.785454545455, 01:03:17.001 "max_latency_us": 34793.65818181818 01:03:17.001 } 01:03:17.001 ], 01:03:17.001 "core_count": 1 01:03:17.001 } 01:03:17.001 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 01:03:17.001 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 01:03:17.001 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 01:03:17.001 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 01:03:17.001 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 01:03:17.001 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 01:03:17.001 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 01:03:17.001 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 01:03:17.001 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 01:03:17.001 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 01:03:17.001 nvmf_trace.0 01:03:17.001 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 01:03:17.001 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 84129 01:03:17.001 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 84129 ']' 01:03:17.001 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
84129 01:03:17.001 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 01:03:17.001 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:03:17.001 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84129 01:03:17.001 killing process with pid 84129 01:03:17.001 Received shutdown signal, test time was about 10.000000 seconds 01:03:17.001 01:03:17.001 Latency(us) 01:03:17.001 [2024-12-09T06:02:11.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:03:17.001 [2024-12-09T06:02:11.587Z] =================================================================================================================== 01:03:17.001 [2024-12-09T06:02:11.587Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:03:17.001 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:03:17.001 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:03:17.001 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84129' 01:03:17.001 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 84129 01:03:17.001 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 84129 01:03:17.001 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 01:03:17.001 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 01:03:17.001 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 01:03:17.262 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:03:17.262 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 01:03:17.262 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 01:03:17.262 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:03:17.262 rmmod nvme_tcp 01:03:17.262 rmmod nvme_fabrics 01:03:17.262 rmmod nvme_keyring 01:03:17.262 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:03:17.262 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 01:03:17.262 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 01:03:17.262 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 84088 ']' 01:03:17.262 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 84088 01:03:17.262 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 84088 ']' 01:03:17.262 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 84088 01:03:17.262 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 01:03:17.262 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:03:17.262 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84088 01:03:17.262 killing process with pid 84088 01:03:17.262 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:03:17.262 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:03:17.262 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84088' 01:03:17.262 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 84088 01:03:17.262 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 84088 01:03:17.262 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:03:17.262 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:03:17.262 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:03:17.262 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 01:03:17.262 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 01:03:17.262 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:03:17.262 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 01:03:17.521 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:03:17.521 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:03:17.521 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:03:17.521 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:03:17.521 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:03:17.521 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:03:17.521 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:03:17.521 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:03:17.521 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:03:17.521 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:03:17.521 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:03:17.521 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:03:17.521 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:03:17.521 06:02:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:17.521 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:17.521 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 01:03:17.521 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:17.521 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:03:17.521 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:17.521 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 01:03:17.521 06:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.jBg 01:03:17.521 01:03:17.521 real 0m13.418s 01:03:17.521 user 0m18.225s 01:03:17.521 sys 0m5.755s 01:03:17.521 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 01:03:17.521 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 01:03:17.521 ************************************ 01:03:17.521 END TEST nvmf_fips 01:03:17.521 ************************************ 01:03:17.782 06:02:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 01:03:17.782 06:02:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:03:17.782 06:02:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:03:17.782 06:02:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:03:17.782 ************************************ 01:03:17.782 START TEST nvmf_control_msg_list 01:03:17.782 ************************************ 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 01:03:17.783 * Looking for test storage... 01:03:17.783 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:03:17.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:17.783 --rc genhtml_branch_coverage=1 01:03:17.783 --rc genhtml_function_coverage=1 01:03:17.783 --rc genhtml_legend=1 01:03:17.783 --rc geninfo_all_blocks=1 01:03:17.783 --rc geninfo_unexecuted_blocks=1 01:03:17.783 01:03:17.783 ' 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:03:17.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:17.783 --rc genhtml_branch_coverage=1 01:03:17.783 --rc genhtml_function_coverage=1 01:03:17.783 --rc genhtml_legend=1 01:03:17.783 --rc geninfo_all_blocks=1 01:03:17.783 --rc geninfo_unexecuted_blocks=1 01:03:17.783 01:03:17.783 ' 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:03:17.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:17.783 --rc genhtml_branch_coverage=1 01:03:17.783 --rc genhtml_function_coverage=1 01:03:17.783 --rc genhtml_legend=1 01:03:17.783 --rc geninfo_all_blocks=1 01:03:17.783 --rc geninfo_unexecuted_blocks=1 01:03:17.783 01:03:17.783 ' 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:03:17.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:17.783 --rc genhtml_branch_coverage=1 01:03:17.783 --rc genhtml_function_coverage=1 01:03:17.783 --rc genhtml_legend=1 01:03:17.783 --rc geninfo_all_blocks=1 01:03:17.783 --rc 
geninfo_unexecuted_blocks=1 01:03:17.783 01:03:17.783 ' 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:03:17.783 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:03:17.784 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:03:17.784 Cannot find device "nvmf_init_br" 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:03:17.784 Cannot find device "nvmf_init_br2" 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:03:17.784 Cannot find device "nvmf_tgt_br" 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 01:03:17.784 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:03:18.045 Cannot find device "nvmf_tgt_br2" 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:03:18.045 Cannot find device "nvmf_init_br" 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:03:18.045 Cannot find device "nvmf_init_br2" 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:03:18.045 Cannot find device "nvmf_tgt_br" 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:03:18.045 Cannot find device "nvmf_tgt_br2" 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:03:18.045 Cannot find device "nvmf_br" 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:03:18.045 Cannot find 
device "nvmf_init_if" 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:03:18.045 Cannot find device "nvmf_init_if2" 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:18.045 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:18.045 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:03:18.045 06:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:03:18.045 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:03:18.305 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:03:18.305 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 01:03:18.305 01:03:18.305 --- 10.0.0.3 ping statistics --- 01:03:18.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:18.305 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:03:18.305 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:03:18.305 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 01:03:18.305 01:03:18.305 --- 10.0.0.4 ping statistics --- 01:03:18.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:18.305 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:03:18.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:03:18.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 01:03:18.305 01:03:18.305 --- 10.0.0.1 ping statistics --- 01:03:18.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:18.305 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:03:18.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:03:18.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 01:03:18.305 01:03:18.305 --- 10.0.0.2 ping statistics --- 01:03:18.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:18.305 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=84528 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 84528 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 84528 ']' 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 01:03:18.305 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:18.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
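Before the target above was launched, nvmf_veth_init (nvmf/common.sh@145-225 in the trace) built the test topology: two initiator-side and two target-side veth pairs, the target ends moved into a dedicated network namespace, and all the bridge-side ends attached to nvmf_br. For readability, the traced commands amount to the following standalone sketch; it is a reconstruction of what the trace shows, not the helper itself:

#!/usr/bin/env bash
# Sketch of the topology built by nvmf_veth_init, reconstructed from the trace above.
set -e
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"

# Veth pairs: the *_if ends carry addresses, the *_br ends get bridged in the root namespace.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side interfaces live in the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addresses as in the trace: initiators on .1/.2, targets on .3/.4.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and bridge the root-namespace ends together.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Sanity-check both directions, as the trace does with ping -c 1.
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1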
01:03:18.306 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 01:03:18.306 06:02:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:03:18.306 [2024-12-09 06:02:12.780452] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:03:18.306 [2024-12-09 06:02:12.780538] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:03:18.565 [2024-12-09 06:02:12.934347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:18.565 [2024-12-09 06:02:12.972461] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:03:18.565 [2024-12-09 06:02:12.972524] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:03:18.565 [2024-12-09 06:02:12.972538] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:03:18.565 [2024-12-09 06:02:12.972549] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:03:18.565 [2024-12-09 06:02:12.972558] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:03:18.565 [2024-12-09 06:02:12.972962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:03:18.565 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:18.565 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 01:03:18.565 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:03:18.565 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 01:03:18.566 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:03:18.566 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:03:18.566 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 01:03:18.566 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 01:03:18.566 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 01:03:18.566 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:18.566 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:03:18.566 [2024-12-09 06:02:13.115711] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:03:18.566 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:18.566 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 01:03:18.566 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:03:18.566 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:03:18.566 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:18.566 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 01:03:18.566 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:18.566 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:03:18.566 Malloc0 01:03:18.566 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:18.566 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 01:03:18.566 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:18.566 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:03:18.566 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:18.566 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:03:18.566 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:18.566 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:03:18.825 [2024-12-09 06:02:13.151248] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:03:18.825 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:18.825 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=84559 01:03:18.825 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 01:03:18.825 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=84560 01:03:18.825 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 01:03:18.825 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=84561 01:03:18.825 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 84559 01:03:18.825 06:02:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 01:03:18.825 [2024-12-09 06:02:13.339568] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery 
subsystem. This behavior is deprecated and will be removed in a future release.
01:03:18.825 [2024-12-09 06:02:13.339889] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
01:03:18.825 [2024-12-09 06:02:13.349632] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
01:03:20.203 Initializing NVMe Controllers
01:03:20.203 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
01:03:20.203 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1
01:03:20.203 Initialization complete. Launching workers.
01:03:20.203 ========================================================
01:03:20.203 Latency(us)
01:03:20.203 Device Information : IOPS MiB/s Average min max
01:03:20.203 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3783.97 14.78 263.99 167.82 810.08
01:03:20.203 ========================================================
01:03:20.203 Total : 3783.97 14.78 263.99 167.82 810.08
01:03:20.203
01:03:20.203 Initializing NVMe Controllers
01:03:20.203 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
01:03:20.203 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
01:03:20.203 Initialization complete. Launching workers.
01:03:20.203 ========================================================
01:03:20.203 Latency(us)
01:03:20.203 Device Information : IOPS MiB/s Average min max
01:03:20.203 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3781.00 14.77 264.23 177.48 811.78
01:03:20.203 ========================================================
01:03:20.203 Total : 3781.00 14.77 264.23 177.48 811.78
01:03:20.203
01:03:20.203 Initializing NVMe Controllers
01:03:20.203 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
01:03:20.203 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
01:03:20.203 Initialization complete. Launching workers.
01:03:20.203 ========================================================
01:03:20.203 Latency(us)
01:03:20.203 Device Information : IOPS MiB/s Average min max
01:03:20.203 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3808.00 14.88 262.26 110.39 810.76
01:03:20.203 ========================================================
01:03:20.203 Total : 3808.00 14.88 262.26 110.39 810.76
01:03:20.203
01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 84560
01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 84561
01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup
01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
01:03:20.203 rmmod nvme_tcp
01:03:20.203 rmmod nvme_fabrics
01:03:20.203 rmmod nvme_keyring
01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0
01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 84528 ']'
01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 84528
01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 84528 ']'
01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 84528
01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname
01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84528
01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0
01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
01:03:20.203 killing process with pid 84528
01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84528'
01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 84528
01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 84528 01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:03:20.203 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:03:20.464 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:03:20.464 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:20.464 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:20.464 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 01:03:20.464 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:20.464 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:03:20.464 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:20.464 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 01:03:20.464 01:03:20.464 real 0m2.768s 01:03:20.464 user 0m4.615s 01:03:20.464 
sys 0m1.332s 01:03:20.464 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 01:03:20.464 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 01:03:20.464 ************************************ 01:03:20.464 END TEST nvmf_control_msg_list 01:03:20.464 ************************************ 01:03:20.464 06:02:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 01:03:20.464 06:02:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:03:20.464 06:02:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:03:20.464 06:02:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:03:20.464 ************************************ 01:03:20.464 START TEST nvmf_wait_for_buf 01:03:20.464 ************************************ 01:03:20.464 06:02:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 01:03:20.464 * Looking for test storage... 01:03:20.464 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:03:20.464 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:03:20.464 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 01:03:20.464 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:03:20.722 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:03:20.722 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:03:20.722 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 01:03:20.722 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 01:03:20.722 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 01:03:20.722 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 01:03:20.722 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 01:03:20.722 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 01:03:20.722 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 01:03:20.722 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 01:03:20.722 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 01:03:20.722 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:03:20.722 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 01:03:20.722 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 01:03:20.722 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 01:03:20.722 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:03:20.722 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 01:03:20.722 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 01:03:20.722 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:03:20.722 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 01:03:20.722 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 01:03:20.722 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 01:03:20.722 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 01:03:20.722 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:03:20.722 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 01:03:20.722 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 01:03:20.722 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:03:20.722 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:03:20.722 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 01:03:20.722 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:03:20.722 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:03:20.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:20.723 --rc genhtml_branch_coverage=1 01:03:20.723 --rc genhtml_function_coverage=1 01:03:20.723 --rc genhtml_legend=1 01:03:20.723 --rc geninfo_all_blocks=1 01:03:20.723 --rc geninfo_unexecuted_blocks=1 01:03:20.723 01:03:20.723 ' 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:03:20.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:20.723 --rc genhtml_branch_coverage=1 01:03:20.723 --rc genhtml_function_coverage=1 01:03:20.723 --rc genhtml_legend=1 01:03:20.723 --rc geninfo_all_blocks=1 01:03:20.723 --rc geninfo_unexecuted_blocks=1 01:03:20.723 01:03:20.723 ' 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:03:20.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:20.723 --rc genhtml_branch_coverage=1 01:03:20.723 --rc genhtml_function_coverage=1 01:03:20.723 --rc genhtml_legend=1 01:03:20.723 --rc geninfo_all_blocks=1 01:03:20.723 --rc geninfo_unexecuted_blocks=1 01:03:20.723 01:03:20.723 ' 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:03:20.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:20.723 --rc genhtml_branch_coverage=1 01:03:20.723 --rc genhtml_function_coverage=1 01:03:20.723 --rc genhtml_legend=1 01:03:20.723 --rc geninfo_all_blocks=1 01:03:20.723 --rc geninfo_unexecuted_blocks=1 01:03:20.723 01:03:20.723 ' 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:03:20.723 06:02:15 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:03:20.723 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
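The ipts expansions in the first setup pass above (and again in the second pass below, nvmf/common.sh@790) tag every firewall rule with an SPDK_NVMF comment, and the iptr step in the earlier teardown (nvmf/common.sh@791) removes exactly those tagged rules by round-tripping the ruleset through iptables-save. A minimal sketch of that pattern follows; the helper bodies are assumed from the traced commands, not copied from nvmf/common.sh:

# Sketch of the ipts/iptr pattern visible in the trace (assumed reconstruction of the helpers).
ipts() {
    # Apply the rule as given, tagging it so it can be identified and removed later.
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

iptr() {
    # Reload the ruleset minus every rule carrying the SPDK_NVMF tag.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}

# Usage, as in the trace: allow NVMe/TCP traffic on port 4420 into the test interfaces
# and let bridged traffic pass, then strip only these rules during cleanup.
ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# ... run the test ...
iptr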
01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:03:20.723 Cannot find device "nvmf_init_br" 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:03:20.723 Cannot find device "nvmf_init_br2" 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:03:20.723 Cannot find device "nvmf_tgt_br" 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:03:20.723 Cannot find device "nvmf_tgt_br2" 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:03:20.723 Cannot find device "nvmf_init_br" 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:03:20.723 Cannot find device "nvmf_init_br2" 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:03:20.723 Cannot find device "nvmf_tgt_br" 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:03:20.723 Cannot find device "nvmf_tgt_br2" 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:03:20.723 Cannot find device "nvmf_br" 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:03:20.723 Cannot find device "nvmf_init_if" 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:03:20.723 Cannot find device "nvmf_init_if2" 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:20.723 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:20.723 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:03:20.723 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:03:20.982 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:03:20.982 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 01:03:20.982 01:03:20.982 --- 10.0.0.3 ping statistics --- 01:03:20.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:20.982 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:03:20.982 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:03:20.982 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 01:03:20.982 01:03:20.982 --- 10.0.0.4 ping statistics --- 01:03:20.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:20.982 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:03:20.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:03:20.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 01:03:20.982 01:03:20.982 --- 10.0.0.1 ping statistics --- 01:03:20.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:20.982 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:03:20.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:03:20.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 01:03:20.982 01:03:20.982 --- 10.0.0.2 ping statistics --- 01:03:20.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:20.982 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:03:20.982 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 01:03:21.240 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:03:21.240 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 01:03:21.240 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:03:21.240 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=84794 01:03:21.240 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 01:03:21.240 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 84794 01:03:21.240 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 84794 ']' 01:03:21.240 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:21.240 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 01:03:21.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:21.240 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:21.240 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 01:03:21.240 06:02:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:03:21.240 [2024-12-09 06:02:15.630889] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
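Because this second target (pid 84794) is started with --wait-for-rpc, wait_for_buf.sh finishes configuration over the RPC socket before subsystem initialization. The rpc_cmd calls traced below correspond to roughly the following sequence; this sketch assumes rpc_cmd forwards to scripts/rpc.py on the default /var/tmp/spdk.sock, and the flag values are copied from the trace:

# Rough equivalent of the wait_for_buf.sh RPC sequence seen below
# (assumes rpc_cmd is a thin wrapper around scripts/rpc.py).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Shrink the iobuf small pool before framework init, presumably so the transport
# has to wait for free buffers, which is what this test exercises.
$RPC accel_set_options --small-cache-size 0 --large-cache-size 0
$RPC iobuf_set_options --small-pool-count 154 --small_bufsize=8192
$RPC framework_start_init

# Back a namespace with a 32 MB malloc bdev (512-byte blocks) and expose it over NVMe/TCP.
$RPC bdev_malloc_create -b Malloc0 32 512
$RPC nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
$RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

# Drive reads from the root namespace against the listener inside nvmf_tgt_ns_spdk.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'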
01:03:21.240 [2024-12-09 06:02:15.631007] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:03:21.240 [2024-12-09 06:02:15.771973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:21.240 [2024-12-09 06:02:15.799362] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:03:21.240 [2024-12-09 06:02:15.799425] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:03:21.240 [2024-12-09 06:02:15.799449] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:03:21.240 [2024-12-09 06:02:15.799456] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:03:21.240 [2024-12-09 06:02:15.799462] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:03:21.240 [2024-12-09 06:02:15.799749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:03:22.174 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:22.174 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 01:03:22.174 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:03:22.174 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 01:03:22.174 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:03:22.174 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:03:22.174 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 01:03:22.174 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 01:03:22.174 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 01:03:22.174 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:22.174 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:03:22.174 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:22.174 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 01:03:22.174 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:22.174 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:03:22.174 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:22.174 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 01:03:22.174 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:22.174 06:02:16 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:03:22.174 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:22.174 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 01:03:22.174 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:22.174 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:03:22.174 Malloc0 01:03:22.174 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:22.175 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 01:03:22.175 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:22.175 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:03:22.175 [2024-12-09 06:02:16.712568] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:03:22.175 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:22.175 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 01:03:22.175 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:22.175 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:03:22.175 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:22.175 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 01:03:22.175 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:22.175 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:03:22.175 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:22.175 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:03:22.175 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:22.175 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:03:22.175 [2024-12-09 06:02:16.736692] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:03:22.175 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:22.175 06:02:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 01:03:22.434 [2024-12-09 06:02:16.931860] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 01:03:23.811 Initializing NVMe Controllers 01:03:23.811 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 01:03:23.812 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 01:03:23.812 Initialization complete. Launching workers. 01:03:23.812 ======================================================== 01:03:23.812 Latency(us) 01:03:23.812 Device Information : IOPS MiB/s Average min max 01:03:23.812 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 127.87 15.98 32359.77 7158.22 62047.81 01:03:23.812 ======================================================== 01:03:23.812 Total : 127.87 15.98 32359.77 7158.22 62047.81 01:03:23.812 01:03:23.812 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 01:03:23.812 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 01:03:23.812 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:23.812 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:03:23.812 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:23.812 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2022 01:03:23.812 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2022 -eq 0 ]] 01:03:23.812 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 01:03:23.812 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 01:03:23.812 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 01:03:23.812 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 01:03:23.812 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:03:23.812 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 01:03:23.812 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 01:03:23.812 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:03:23.812 rmmod nvme_tcp 01:03:23.812 rmmod nvme_fabrics 01:03:24.071 rmmod nvme_keyring 01:03:24.071 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:03:24.071 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 01:03:24.071 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 01:03:24.071 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 84794 ']' 01:03:24.071 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 84794 01:03:24.071 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 84794 ']' 01:03:24.071 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 84794 01:03:24.071 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 
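The wait_for_buf check traced above reduces to one number: the nvmf_TCP module's small-pool retry counter, read back over RPC after the 4-queue perf run has exhausted the deliberately tiny iobuf pool (154 small buffers of 8192 bytes). A minimal sketch of that query, assuming the stock rpc.py client on the default /var/tmp/spdk.sock and the same jq filter the trace shows for wait_for_buf.sh@32; the rootdir variable is illustrative:

rootdir=/home/vagrant/spdk_repo/spdk

# Pull per-module iobuf statistics from the running target and extract the
# nvmf_TCP small-pool retry count.
retry_count=$("$rootdir/scripts/rpc.py" iobuf_get_stats \
    | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')

# The test only passes when allocations actually had to retry, i.e. the pool
# really was exhausted under load (2022 retries in the run above).
(( retry_count == 0 )) && exit 1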
01:03:24.071 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:03:24.071 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84794 01:03:24.071 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:03:24.071 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:03:24.071 killing process with pid 84794 01:03:24.071 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84794' 01:03:24.071 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 84794 01:03:24.072 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 84794 01:03:24.072 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:03:24.072 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:03:24.072 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:03:24.072 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 01:03:24.072 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 01:03:24.072 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 01:03:24.072 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:03:24.072 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:03:24.072 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:03:24.072 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:03:24.072 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:03:24.072 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:03:24.332 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:03:24.332 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:03:24.332 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:03:24.332 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:03:24.332 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:03:24.332 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:03:24.332 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:03:24.332 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:03:24.332 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:24.332 06:02:18 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:24.332 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 01:03:24.332 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:24.332 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:03:24.332 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:24.332 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 01:03:24.332 01:03:24.332 real 0m3.923s 01:03:24.332 user 0m3.514s 01:03:24.332 sys 0m0.748s 01:03:24.332 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:03:24.332 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 01:03:24.332 ************************************ 01:03:24.332 END TEST nvmf_wait_for_buf 01:03:24.332 ************************************ 01:03:24.332 06:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 01:03:24.332 06:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 01:03:24.332 06:02:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 01:03:24.332 06:02:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:03:24.332 06:02:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 01:03:24.332 06:02:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:03:24.592 ************************************ 01:03:24.592 START TEST nvmf_nsid 01:03:24.592 ************************************ 01:03:24.592 06:02:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 01:03:24.592 * Looking for test storage... 
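The nvmftestfini sequence just above undoes only what the harness added: iptables rules are removed by filtering on the SPDK_NVMF comment rather than flushing whole chains, and the veth/bridge topology is dismantled device by device. A rough equivalent of that cleanup, using the interface names from the log; the final netns delete is an assumption about what remove_spdk_ns boils down to:

# Keep every rule that is not tagged with the SPDK_NVMF comment.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Detach the bridge ports, then remove the bridge, the host-side veth ends,
# and the target-side interfaces inside the namespace.
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk    # assumed effect of remove_spdk_ns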
01:03:24.592 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:03:24.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:24.592 --rc genhtml_branch_coverage=1 01:03:24.592 --rc genhtml_function_coverage=1 01:03:24.592 --rc genhtml_legend=1 01:03:24.592 --rc geninfo_all_blocks=1 01:03:24.592 --rc geninfo_unexecuted_blocks=1 01:03:24.592 01:03:24.592 ' 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:03:24.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:24.592 --rc genhtml_branch_coverage=1 01:03:24.592 --rc genhtml_function_coverage=1 01:03:24.592 --rc genhtml_legend=1 01:03:24.592 --rc geninfo_all_blocks=1 01:03:24.592 --rc geninfo_unexecuted_blocks=1 01:03:24.592 01:03:24.592 ' 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:03:24.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:24.592 --rc genhtml_branch_coverage=1 01:03:24.592 --rc genhtml_function_coverage=1 01:03:24.592 --rc genhtml_legend=1 01:03:24.592 --rc geninfo_all_blocks=1 01:03:24.592 --rc geninfo_unexecuted_blocks=1 01:03:24.592 01:03:24.592 ' 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:03:24.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:24.592 --rc genhtml_branch_coverage=1 01:03:24.592 --rc genhtml_function_coverage=1 01:03:24.592 --rc genhtml_legend=1 01:03:24.592 --rc geninfo_all_blocks=1 01:03:24.592 --rc geninfo_unexecuted_blocks=1 01:03:24.592 01:03:24.592 ' 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
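The lcov probe above runs the harness's version-comparison helper to decide whether the installed lcov (1.15 here) is older than 2: both version strings are split on dots and dashes and compared numerically field by field, with missing fields treated as zero. A standalone sketch of the same idea; the function name is chosen here for illustration and the fields are assumed to be plain integers:

# Succeed (return 0) when version $1 is strictly older than version $2.
version_lt() {
    local -a ver1 ver2
    IFS='.-' read -ra ver1 <<< "$1"
    IFS='.-' read -ra ver2 <<< "$2"
    local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < n; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1    # equal versions are not "less than"
}

version_lt "$(lcov --version | awk '{print $NF}')" 2 \
    && echo 'lcov < 2: use the lcov_branch_coverage/lcov_function_coverage option names'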
01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:03:24.592 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:03:24.593 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:03:24.593 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:24.593 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:24.593 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:24.593 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 01:03:24.593 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:24.593 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 01:03:24.593 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:03:24.593 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:03:24.593 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:03:24.593 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:03:24.593 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:03:24.593 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:03:24.593 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:03:24.593 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:03:24.593 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:03:24.593 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 01:03:24.593 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 01:03:24.593 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 01:03:24.593 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 01:03:24.593 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 01:03:24.593 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 01:03:24.593 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 01:03:24.593 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:03:24.593 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:03:24.593 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 01:03:24.593 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 01:03:24.593 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 01:03:24.593 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:24.593 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:03:24.593 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:24.853 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:03:24.853 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:03:24.853 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:03:24.853 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:03:24.853 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:03:24.853 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 01:03:24.853 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:03:24.853 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:03:24.853 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:03:24.853 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:03:24.853 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:03:24.853 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:03:24.853 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:03:24.853 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:03:24.853 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:03:24.853 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:03:24.853 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:03:24.853 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:03:24.853 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:03:24.853 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:03:24.853 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:03:24.853 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:03:24.853 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:03:24.853 Cannot find device "nvmf_init_br" 01:03:24.853 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 01:03:24.853 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:03:24.853 Cannot find device "nvmf_init_br2" 01:03:24.853 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 01:03:24.853 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:03:24.853 Cannot find device "nvmf_tgt_br" 01:03:24.853 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:03:24.854 Cannot find device "nvmf_tgt_br2" 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:03:24.854 Cannot find device "nvmf_init_br" 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:03:24.854 Cannot find device "nvmf_init_br2" 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:03:24.854 Cannot find device "nvmf_tgt_br" 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:03:24.854 Cannot find device "nvmf_tgt_br2" 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:03:24.854 Cannot find device "nvmf_br" 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:03:24.854 Cannot find device "nvmf_init_if" 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:03:24.854 Cannot find device "nvmf_init_if2" 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:24.854 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
01:03:24.854 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:03:24.854 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
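Taken together, the nvmf_veth_init steps above build the topology both tests depend on: two veth pairs for the initiator side, two for the target side, the target ends moved into the nvmf_tgt_ns_spdk namespace where the SPDK target runs, and all four bridge-side ends enslaved to nvmf_br. A condensed sketch of that layout with the addresses from the log (10.0.0.1/2 on the host, 10.0.0.3/4 inside the namespace):

ip netns add nvmf_tgt_ns_spdk

# Initiator-side and target-side veth pairs; the *_br ends will join the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# The target interfaces live in the namespace the SPDK target is started in.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up, create the bridge, and enslave the four *_br ends.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

After this, the iptables ACCEPT rules and the four pings that follow simply confirm that port 4420 traffic can cross the bridge in both directions.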
01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:03:25.117 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:03:25.117 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 01:03:25.117 01:03:25.117 --- 10.0.0.3 ping statistics --- 01:03:25.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:25.117 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:03:25.117 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:03:25.117 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 01:03:25.117 01:03:25.117 --- 10.0.0.4 ping statistics --- 01:03:25.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:25.117 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:03:25.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:03:25.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 01:03:25.117 01:03:25.117 --- 10.0.0.1 ping statistics --- 01:03:25.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:25.117 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:03:25.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:03:25.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 01:03:25.117 01:03:25.117 --- 10.0.0.2 ping statistics --- 01:03:25.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:25.117 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=85083 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 85083 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 85083 ']' 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 01:03:25.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 01:03:25.117 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 01:03:25.117 [2024-12-09 06:02:19.668594] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:03:25.117 [2024-12-09 06:02:19.668734] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:03:25.377 [2024-12-09 06:02:19.815999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:25.377 [2024-12-09 06:02:19.843810] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:03:25.377 [2024-12-09 06:02:19.843859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:03:25.377 [2024-12-09 06:02:19.843885] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:03:25.377 [2024-12-09 06:02:19.843892] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:03:25.377 [2024-12-09 06:02:19.843899] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:03:25.377 [2024-12-09 06:02:19.844211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:03:25.377 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:25.377 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 01:03:25.377 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:03:25.377 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 01:03:25.377 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 01:03:25.636 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:03:25.636 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 01:03:25.636 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=85115 01:03:25.636 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 01:03:25.636 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 01:03:25.636 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 01:03:25.636 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 01:03:25.636 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 01:03:25.636 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 01:03:25.637 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:03:25.637 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:03:25.637 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:03:25.637 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:03:25.637 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:03:25.637 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:03:25.637 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
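The tgt2addr value above comes from get_main_ns_ip, which maps the transport name to the shell variable holding the usable address and then dereferences it, so tcp resolves to NVMF_INITIATOR_IP and hence 10.0.0.1. A small sketch of that indirection, with the array and variable names taken from the log and the lookup key written out for illustration:

# Pick the variable that carries the right address for this transport,
# then expand it indirectly.
declare -A ip_candidates=(
    [rdma]=NVMF_FIRST_TARGET_IP
    [tcp]=NVMF_INITIATOR_IP
)

NVMF_INITIATOR_IP=10.0.0.1
transport=tcp

varname=${ip_candidates[$transport]}
tgt2addr=${!varname}    # 10.0.0.1 for tcp
echo "$tgt2addr"

This is why the second target's listener later comes up on 10.0.0.1 port 4421 rather than on the 10.0.0.3 address the first, namespaced target uses.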
01:03:25.637 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 01:03:25.637 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 01:03:25.637 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=76590364-5665-41e5-994b-194ce61869ef 01:03:25.637 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 01:03:25.637 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=517ee83e-09af-4c67-a0ca-33ef8170eac7 01:03:25.637 06:02:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 01:03:25.637 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=3faa30af-a55a-4aa7-a664-9589391b5ab3 01:03:25.637 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 01:03:25.637 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:25.637 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 01:03:25.637 null0 01:03:25.637 null1 01:03:25.637 null2 01:03:25.637 [2024-12-09 06:02:20.045803] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:03:25.637 [2024-12-09 06:02:20.052712] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:03:25.637 [2024-12-09 06:02:20.052810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85115 ] 01:03:25.637 [2024-12-09 06:02:20.069908] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:03:25.637 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:25.637 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 85115 /var/tmp/tgt2.sock 01:03:25.637 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 85115 ']' 01:03:25.637 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 01:03:25.637 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 01:03:25.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 01:03:25.637 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
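nsid.sh drives two SPDK targets at once: the nvmf_tgt started earlier inside the namespace on the default /var/tmp/spdk.sock, and a second spdk_tgt pinned to core 1 (-m 2) that answers on /var/tmp/tgt2.sock. The bare rpc_cmd at nsid.sh@63 batches its setup to the first instance, while the rpc.py -s /var/tmp/tgt2.sock call that follows configures the second. A minimal sketch of the two-socket pattern; rpc_get_methods merely stands in for whatever configuration each instance actually receives:

# First target: default RPC socket (/var/tmp/spdk.sock).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods

# Second target: its own core mask and RPC socket, addressed explicitly with -s.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock &
tgt2pid=$!

# Wait for the second instance to create its socket before talking to it
# (the harness's waitforlisten does a more careful version of this).
while [ ! -S /var/tmp/tgt2.sock ]; do sleep 0.1; done
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock rpc_get_methods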
01:03:25.637 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 01:03:25.637 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 01:03:25.637 [2024-12-09 06:02:20.211355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:25.896 [2024-12-09 06:02:20.250908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:03:25.896 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:25.896 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 01:03:25.896 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 01:03:26.464 [2024-12-09 06:02:20.851451] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:03:26.464 [2024-12-09 06:02:20.867526] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 01:03:26.464 nvme0n1 nvme0n2 01:03:26.464 nvme1n1 01:03:26.464 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 01:03:26.464 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 01:03:26.464 06:02:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 01:03:26.723 06:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 01:03:26.723 06:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 01:03:26.723 06:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 01:03:26.723 06:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 01:03:26.723 06:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 01:03:26.723 06:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 01:03:26.723 06:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 01:03:26.723 06:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 01:03:26.723 06:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 01:03:26.723 06:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 01:03:26.723 06:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 01:03:26.723 06:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 01:03:26.723 06:02:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 01:03:27.660 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 01:03:27.660 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 01:03:27.660 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 01:03:27.660 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 01:03:27.660 06:02:22 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 01:03:27.660 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 76590364-5665-41e5-994b-194ce61869ef 01:03:27.660 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 01:03:27.660 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 01:03:27.660 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 01:03:27.660 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 01:03:27.660 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 01:03:27.660 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=76590364566541e5994b194ce61869ef 01:03:27.660 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 76590364566541E5994B194CE61869EF 01:03:27.660 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 76590364566541E5994B194CE61869EF == \7\6\5\9\0\3\6\4\5\6\6\5\4\1\E\5\9\9\4\B\1\9\4\C\E\6\1\8\6\9\E\F ]] 01:03:27.660 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 01:03:27.660 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 01:03:27.660 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 01:03:27.661 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 01:03:27.661 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 01:03:27.661 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 01:03:27.661 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 01:03:27.661 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 517ee83e-09af-4c67-a0ca-33ef8170eac7 01:03:27.661 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 01:03:27.661 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 01:03:27.661 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 01:03:27.661 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 01:03:27.661 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 01:03:27.920 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=517ee83e09af4c67a0ca33ef8170eac7 01:03:27.920 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 517EE83E09AF4C67A0CA33EF8170EAC7 01:03:27.920 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 517EE83E09AF4C67A0CA33EF8170EAC7 == \5\1\7\E\E\8\3\E\0\9\A\F\4\C\6\7\A\0\C\A\3\3\E\F\8\1\7\0\E\A\C\7 ]] 01:03:27.920 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 01:03:27.920 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 01:03:27.920 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 01:03:27.920 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 01:03:27.920 06:02:22 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 01:03:27.920 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 01:03:27.920 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 01:03:27.920 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 3faa30af-a55a-4aa7-a664-9589391b5ab3 01:03:27.920 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 01:03:27.920 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 01:03:27.920 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 01:03:27.920 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 01:03:27.920 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 01:03:27.920 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=3faa30afa55a4aa7a6649589391b5ab3 01:03:27.920 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 3FAA30AFA55A4AA7A6649589391B5AB3 01:03:27.920 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 3FAA30AFA55A4AA7A6649589391B5AB3 == \3\F\A\A\3\0\A\F\A\5\5\A\4\A\A\7\A\6\6\4\9\5\8\9\3\9\1\B\5\A\B\3 ]] 01:03:27.920 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 01:03:27.920 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 01:03:27.920 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 01:03:27.920 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 85115 01:03:27.920 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 85115 ']' 01:03:27.920 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 85115 01:03:27.920 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 01:03:27.920 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:03:28.179 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85115 01:03:28.180 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:03:28.180 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:03:28.180 killing process with pid 85115 01:03:28.180 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85115' 01:03:28.180 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 85115 01:03:28.180 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 85115 01:03:28.180 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 01:03:28.180 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 01:03:28.180 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 01:03:28.439 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:03:28.439 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 
01:03:28.439 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 01:03:28.439 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:03:28.439 rmmod nvme_tcp 01:03:28.439 rmmod nvme_fabrics 01:03:28.439 rmmod nvme_keyring 01:03:28.439 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:03:28.439 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 01:03:28.439 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 01:03:28.439 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 85083 ']' 01:03:28.439 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 85083 01:03:28.439 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 85083 ']' 01:03:28.439 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 85083 01:03:28.439 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 01:03:28.439 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:03:28.439 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85083 01:03:28.439 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:03:28.439 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:03:28.439 killing process with pid 85083 01:03:28.439 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85083' 01:03:28.439 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 85083 01:03:28.439 06:02:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 85083 01:03:28.698 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:03:28.698 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:03:28.698 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:03:28.698 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 01:03:28.698 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:03:28.698 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 01:03:28.698 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 01:03:28.698 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:03:28.698 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:03:28.699 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:03:28.699 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:03:28.699 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:03:28.699 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:03:28.699 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link 
set nvmf_init_br down 01:03:28.699 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:03:28.699 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:03:28.699 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:03:28.699 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:03:28.699 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:03:28.699 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:03:28.699 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:28.699 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:28.699 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 01:03:28.699 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:28.699 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:03:28.699 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:28.958 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 01:03:28.958 01:03:28.958 real 0m4.393s 01:03:28.958 user 0m6.887s 01:03:28.958 sys 0m1.206s 01:03:28.958 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 01:03:28.958 06:02:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 01:03:28.958 ************************************ 01:03:28.958 END TEST nvmf_nsid 01:03:28.958 ************************************ 01:03:28.958 06:02:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 01:03:28.958 01:03:28.958 real 6m56.064s 01:03:28.958 user 16m51.273s 01:03:28.958 sys 1m21.702s 01:03:28.958 06:02:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 01:03:28.958 06:02:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 01:03:28.958 ************************************ 01:03:28.958 END TEST nvmf_target_extra 01:03:28.958 ************************************ 01:03:28.958 06:02:23 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 01:03:28.958 06:02:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:03:28.958 06:02:23 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 01:03:28.958 06:02:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:03:28.958 ************************************ 01:03:28.958 START TEST nvmf_host 01:03:28.958 ************************************ 01:03:28.958 06:02:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 01:03:28.958 * Looking for test storage... 
01:03:28.958 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 01:03:28.958 06:02:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:03:28.958 06:02:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 01:03:28.958 06:02:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:03:29.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:29.219 --rc genhtml_branch_coverage=1 01:03:29.219 --rc genhtml_function_coverage=1 01:03:29.219 --rc genhtml_legend=1 01:03:29.219 --rc geninfo_all_blocks=1 01:03:29.219 --rc geninfo_unexecuted_blocks=1 01:03:29.219 01:03:29.219 ' 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:03:29.219 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 01:03:29.219 --rc genhtml_branch_coverage=1 01:03:29.219 --rc genhtml_function_coverage=1 01:03:29.219 --rc genhtml_legend=1 01:03:29.219 --rc geninfo_all_blocks=1 01:03:29.219 --rc geninfo_unexecuted_blocks=1 01:03:29.219 01:03:29.219 ' 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:03:29.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:29.219 --rc genhtml_branch_coverage=1 01:03:29.219 --rc genhtml_function_coverage=1 01:03:29.219 --rc genhtml_legend=1 01:03:29.219 --rc geninfo_all_blocks=1 01:03:29.219 --rc geninfo_unexecuted_blocks=1 01:03:29.219 01:03:29.219 ' 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:03:29.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:29.219 --rc genhtml_branch_coverage=1 01:03:29.219 --rc genhtml_function_coverage=1 01:03:29.219 --rc genhtml_legend=1 01:03:29.219 --rc geninfo_all_blocks=1 01:03:29.219 --rc geninfo_unexecuted_blocks=1 01:03:29.219 01:03:29.219 ' 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:03:29.219 06:02:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:03:29.220 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 
01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:03:29.220 ************************************ 01:03:29.220 START TEST nvmf_multicontroller 01:03:29.220 ************************************ 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 01:03:29.220 * Looking for test storage... 01:03:29.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:03:29.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:29.220 --rc genhtml_branch_coverage=1 01:03:29.220 --rc genhtml_function_coverage=1 01:03:29.220 --rc genhtml_legend=1 01:03:29.220 --rc geninfo_all_blocks=1 01:03:29.220 --rc geninfo_unexecuted_blocks=1 01:03:29.220 01:03:29.220 ' 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:03:29.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:29.220 --rc genhtml_branch_coverage=1 01:03:29.220 --rc genhtml_function_coverage=1 01:03:29.220 --rc genhtml_legend=1 01:03:29.220 --rc geninfo_all_blocks=1 01:03:29.220 --rc geninfo_unexecuted_blocks=1 01:03:29.220 01:03:29.220 ' 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:03:29.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:29.220 --rc genhtml_branch_coverage=1 01:03:29.220 --rc genhtml_function_coverage=1 01:03:29.220 --rc genhtml_legend=1 01:03:29.220 --rc geninfo_all_blocks=1 01:03:29.220 --rc geninfo_unexecuted_blocks=1 01:03:29.220 01:03:29.220 ' 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:03:29.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:29.220 --rc genhtml_branch_coverage=1 01:03:29.220 --rc genhtml_function_coverage=1 01:03:29.220 --rc genhtml_legend=1 01:03:29.220 --rc geninfo_all_blocks=1 01:03:29.220 --rc geninfo_unexecuted_blocks=1 01:03:29.220 01:03:29.220 ' 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 01:03:29.220 06:02:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:03:29.220 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:03:29.221 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:03:29.221 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:03:29.221 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:03:29.480 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:03:29.480 06:02:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 01:03:29.480 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@460 -- # nvmf_veth_init 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:03:29.481 06:02:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:03:29.481 Cannot find device "nvmf_init_br" 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:03:29.481 Cannot find device "nvmf_init_br2" 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:03:29.481 Cannot find device "nvmf_tgt_br" 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # true 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:03:29.481 Cannot find device "nvmf_tgt_br2" 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # true 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:03:29.481 Cannot find device "nvmf_init_br" 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # true 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:03:29.481 Cannot find device "nvmf_init_br2" 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # true 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:03:29.481 Cannot find device "nvmf_tgt_br" 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # true 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:03:29.481 Cannot find device "nvmf_tgt_br2" 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # true 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:03:29.481 Cannot find device "nvmf_br" 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # true 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:03:29.481 Cannot find device "nvmf_init_if" 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # true 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:03:29.481 Cannot find device "nvmf_init_if2" 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # true 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:29.481 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@173 -- # true 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:29.481 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # true 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:03:29.481 06:02:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:03:29.481 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:03:29.481 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:03:29.481 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:03:29.740 06:02:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:03:29.740 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:03:29.740 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 01:03:29.740 01:03:29.740 --- 10.0.0.3 ping statistics --- 01:03:29.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:29.740 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:03:29.740 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:03:29.740 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 01:03:29.740 01:03:29.740 --- 10.0.0.4 ping statistics --- 01:03:29.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:29.740 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:03:29.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:03:29.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 01:03:29.740 01:03:29.740 --- 10.0.0.1 ping statistics --- 01:03:29.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:29.740 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:03:29.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:03:29.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 01:03:29.740 01:03:29.740 --- 10.0.0.2 ping statistics --- 01:03:29.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:29.740 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@461 -- # return 0 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 01:03:29.740 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:29.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:29.741 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=85483 01:03:29.741 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 01:03:29.741 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 85483 01:03:29.741 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 85483 ']' 01:03:29.741 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:29.741 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 01:03:29.741 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:29.741 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 01:03:29.741 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:30.000 [2024-12-09 06:02:24.328091] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:03:30.000 [2024-12-09 06:02:24.328208] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:03:30.000 [2024-12-09 06:02:24.483121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:03:30.000 [2024-12-09 06:02:24.523738] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:03:30.000 [2024-12-09 06:02:24.524033] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:03:30.000 [2024-12-09 06:02:24.524198] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:03:30.001 [2024-12-09 06:02:24.524359] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:03:30.001 [2024-12-09 06:02:24.524404] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:03:30.001 [2024-12-09 06:02:24.525412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:03:30.001 [2024-12-09 06:02:24.525563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:03:30.001 [2024-12-09 06:02:24.525555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:30.260 [2024-12-09 06:02:24.674433] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:30.260 Malloc0 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- common/autotest_common.sh@10 -- # set +x 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:30.260 [2024-12-09 06:02:24.732667] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:30.260 [2024-12-09 06:02:24.740543] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:30.260 Malloc1 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:30.260 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:30.261 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4421 01:03:30.261 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:30.261 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:30.261 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:30.261 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=85517 01:03:30.261 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 01:03:30.261 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:03:30.261 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 85517 /var/tmp/bdevperf.sock 01:03:30.261 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 85517 ']' 01:03:30.261 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:03:30.261 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 01:03:30.261 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:03:30.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
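Everything the multicontroller test has configured so far went through rpc_cmd, which in these scripts is effectively a wrapper around SPDK's scripts/rpc.py. A rough standalone sketch of the same target-side setup, assuming the default /var/tmp/spdk.sock RPC socket and with the addresses, serials and NQNs copied from the trace above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                      # same transport options the test passes
  $rpc bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB RAM-backed bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  # cnode2 is built the same way around Malloc1 with serial SPDK00000000000002,
  # after which bdevperf is started with its own RPC socket (-z -r /var/tmp/bdevperf.sock).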
01:03:30.261 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 01:03:30.261 06:02:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:30.829 NVMe0n1 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:30.829 1 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:30.829 2024/12/09 06:02:25 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 
hostnqn:nqn.2021-09-7.io.spdk:00001 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 01:03:30.829 request: 01:03:30.829 { 01:03:30.829 "method": "bdev_nvme_attach_controller", 01:03:30.829 "params": { 01:03:30.829 "name": "NVMe0", 01:03:30.829 "trtype": "tcp", 01:03:30.829 "traddr": "10.0.0.3", 01:03:30.829 "adrfam": "ipv4", 01:03:30.829 "trsvcid": "4420", 01:03:30.829 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:03:30.829 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 01:03:30.829 "hostaddr": "10.0.0.1", 01:03:30.829 "prchk_reftag": false, 01:03:30.829 "prchk_guard": false, 01:03:30.829 "hdgst": false, 01:03:30.829 "ddgst": false, 01:03:30.829 "allow_unrecognized_csi": false 01:03:30.829 } 01:03:30.829 } 01:03:30.829 Got JSON-RPC error response 01:03:30.829 GoRPCClient: error on JSON-RPC call 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:30.829 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:30.829 2024/12/09 06:02:25 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: 
error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 01:03:30.829 request: 01:03:30.829 { 01:03:30.829 "method": "bdev_nvme_attach_controller", 01:03:30.829 "params": { 01:03:30.829 "name": "NVMe0", 01:03:30.829 "trtype": "tcp", 01:03:30.829 "traddr": "10.0.0.3", 01:03:30.829 "adrfam": "ipv4", 01:03:30.829 "trsvcid": "4420", 01:03:30.829 "subnqn": "nqn.2016-06.io.spdk:cnode2", 01:03:30.829 "hostaddr": "10.0.0.1", 01:03:30.829 "prchk_reftag": false, 01:03:30.829 "prchk_guard": false, 01:03:30.829 "hdgst": false, 01:03:30.829 "ddgst": false, 01:03:30.829 "allow_unrecognized_csi": false 01:03:30.830 } 01:03:30.830 } 01:03:30.830 Got JSON-RPC error response 01:03:30.830 GoRPCClient: error on JSON-RPC call 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:30.830 2024/12/09 06:02:25 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 01:03:30.830 request: 01:03:30.830 { 01:03:30.830 
"method": "bdev_nvme_attach_controller", 01:03:30.830 "params": { 01:03:30.830 "name": "NVMe0", 01:03:30.830 "trtype": "tcp", 01:03:30.830 "traddr": "10.0.0.3", 01:03:30.830 "adrfam": "ipv4", 01:03:30.830 "trsvcid": "4420", 01:03:30.830 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:03:30.830 "hostaddr": "10.0.0.1", 01:03:30.830 "prchk_reftag": false, 01:03:30.830 "prchk_guard": false, 01:03:30.830 "hdgst": false, 01:03:30.830 "ddgst": false, 01:03:30.830 "multipath": "disable", 01:03:30.830 "allow_unrecognized_csi": false 01:03:30.830 } 01:03:30.830 } 01:03:30.830 Got JSON-RPC error response 01:03:30.830 GoRPCClient: error on JSON-RPC call 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:30.830 2024/12/09 06:02:25 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 01:03:30.830 request: 01:03:30.830 { 01:03:30.830 "method": "bdev_nvme_attach_controller", 01:03:30.830 "params": { 01:03:30.830 "name": "NVMe0", 01:03:30.830 "trtype": "tcp", 01:03:30.830 "traddr": 
"10.0.0.3", 01:03:30.830 "adrfam": "ipv4", 01:03:30.830 "trsvcid": "4420", 01:03:30.830 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:03:30.830 "hostaddr": "10.0.0.1", 01:03:30.830 "prchk_reftag": false, 01:03:30.830 "prchk_guard": false, 01:03:30.830 "hdgst": false, 01:03:30.830 "ddgst": false, 01:03:30.830 "multipath": "failover", 01:03:30.830 "allow_unrecognized_csi": false 01:03:30.830 } 01:03:30.830 } 01:03:30.830 Got JSON-RPC error response 01:03:30.830 GoRPCClient: error on JSON-RPC call 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:30.830 NVMe0n1 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:30.830 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:31.089 01:03:31.089 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:31.089 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:03:31.089 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:31.089 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:31.089 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 01:03:31.089 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:31.089 06:02:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 01:03:31.089 06:02:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:03:32.024 { 01:03:32.024 "results": [ 01:03:32.024 { 01:03:32.024 "job": "NVMe0n1", 01:03:32.024 "core_mask": "0x1", 01:03:32.024 "workload": "write", 01:03:32.024 "status": "finished", 01:03:32.024 "queue_depth": 128, 01:03:32.024 "io_size": 4096, 01:03:32.024 "runtime": 1.00777, 01:03:32.024 "iops": 21268.741875626383, 01:03:32.024 "mibps": 83.08102295166556, 01:03:32.024 "io_failed": 0, 01:03:32.024 "io_timeout": 0, 01:03:32.024 "avg_latency_us": 6006.940162019561, 01:03:32.024 "min_latency_us": 1966.08, 01:03:32.024 "max_latency_us": 10128.290909090909 01:03:32.024 } 01:03:32.024 ], 01:03:32.024 "core_count": 1 01:03:32.024 } 01:03:32.024 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 01:03:32.024 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:32.024 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:32.282 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:32.282 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n 10.0.0.2 ]] 01:03:32.282 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 01:03:32.282 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:32.282 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:32.282 nvme1n1 01:03:32.282 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:32.282 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 01:03:32.282 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:32.282 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:32.282 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # jq -r '.[].peer_address.traddr' 01:03:32.282 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:32.282 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # [[ 10.0.0.1 == \1\0\.\0\.\0\.\1 ]] 01:03:32.282 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller nvme1 01:03:32.282 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:32.283 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:32.283 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:32.283 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@109 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 01:03:32.283 
06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:32.283 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:32.283 nvme1n1 01:03:32.283 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:32.283 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # jq -r '.[].peer_address.traddr' 01:03:32.283 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 01:03:32.283 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:32.283 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:32.283 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:32.541 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # [[ 10.0.0.2 == \1\0\.\0\.\0\.\2 ]] 01:03:32.541 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 85517 01:03:32.541 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 85517 ']' 01:03:32.541 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 85517 01:03:32.541 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 01:03:32.541 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:03:32.542 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85517 01:03:32.542 killing process with pid 85517 01:03:32.542 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:03:32.542 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:03:32.542 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85517' 01:03:32.542 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 85517 01:03:32.542 06:02:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 85517 01:03:32.542 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:03:32.542 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:32.542 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:32.542 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:32.542 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 01:03:32.542 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:32.542 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:32.542 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:32.542 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM 
EXIT 01:03:32.542 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:03:32.542 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 01:03:32.542 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 01:03:32.542 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 01:03:32.542 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 01:03:32.542 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 01:03:32.542 [2024-12-09 06:02:24.862305] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:03:32.542 [2024-12-09 06:02:24.862400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85517 ] 01:03:32.542 [2024-12-09 06:02:25.014708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:32.542 [2024-12-09 06:02:25.052813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:03:32.542 [2024-12-09 06:02:25.451554] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 0a6be083-59a7-4ea4-b686-7c774a8e1e13 already exists 01:03:32.542 [2024-12-09 06:02:25.451599] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:0a6be083-59a7-4ea4-b686-7c774a8e1e13 alias for bdev NVMe1n1 01:03:32.542 [2024-12-09 06:02:25.451633] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 01:03:32.542 Running I/O for 1 seconds... 
01:03:32.542 21213.00 IOPS, 82.86 MiB/s 01:03:32.542 Latency(us) 01:03:32.542 [2024-12-09T06:02:27.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:03:32.542 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 01:03:32.542 NVMe0n1 : 1.01 21268.74 83.08 0.00 0.00 6006.94 1966.08 10128.29 01:03:32.542 [2024-12-09T06:02:27.128Z] =================================================================================================================== 01:03:32.542 [2024-12-09T06:02:27.128Z] Total : 21268.74 83.08 0.00 0.00 6006.94 1966.08 10128.29 01:03:32.542 Received shutdown signal, test time was about 1.000000 seconds 01:03:32.542 01:03:32.542 Latency(us) 01:03:32.542 [2024-12-09T06:02:27.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:03:32.542 [2024-12-09T06:02:27.128Z] =================================================================================================================== 01:03:32.542 [2024-12-09T06:02:27.128Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:03:32.542 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 01:03:32.542 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:03:32.542 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 01:03:32.542 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 01:03:32.542 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 01:03:32.542 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 01:03:32.801 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:03:32.801 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 01:03:32.801 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 01:03:32.801 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:03:32.801 rmmod nvme_tcp 01:03:32.801 rmmod nvme_fabrics 01:03:32.801 rmmod nvme_keyring 01:03:32.801 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:03:32.801 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 01:03:32.801 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 01:03:32.801 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 85483 ']' 01:03:32.801 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 85483 01:03:32.801 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 85483 ']' 01:03:32.801 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 85483 01:03:32.801 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 01:03:32.801 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:03:32.801 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85483 01:03:32.801 killing process with pid 85483 01:03:32.801 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:03:32.801 06:02:27 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:03:32.801 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85483' 01:03:32.801 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 85483 01:03:32.801 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 85483 01:03:32.801 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:03:32.801 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:03:32.801 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:03:32.801 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 01:03:32.801 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:03:32.801 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 01:03:32.801 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 01:03:33.060 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:03:33.060 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:03:33.060 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:03:33.060 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:03:33.060 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:03:33.060 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:03:33.060 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:03:33.060 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:03:33.060 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:03:33.060 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:03:33.060 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:03:33.060 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:03:33.060 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:03:33.060 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:33.060 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:33.060 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@246 -- # remove_spdk_ns 01:03:33.060 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:33.060 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:03:33.060 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
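The bdevperf summary captured above (21268.74 IOPS of 4 KiB writes at queue depth 128, 83.08 MiB/s, 6006.94 us mean latency) is internally consistent. A quick sanity check of those figures with awk, using numbers copied from the JSON result; the latency estimate applies Little's law, so it only roughly matches the measured value:

  awk 'BEGIN {
    iops = 21268.74; io_size = 4096; qd = 128
    printf "throughput: %.2f MiB/s\n", iops * io_size / (1024 * 1024)   # ~83.08 MiB/s
    printf "mean latency: %.0f us\n", qd / iops * 1e6                   # ~6018 us vs 6006.94 us reported
  }'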
01:03:33.060 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@300 -- # return 0 01:03:33.060 01:03:33.060 real 0m4.035s 01:03:33.060 user 0m11.234s 01:03:33.060 sys 0m1.119s 01:03:33.318 ************************************ 01:03:33.318 END TEST nvmf_multicontroller 01:03:33.318 ************************************ 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:03:33.318 ************************************ 01:03:33.318 START TEST nvmf_aer 01:03:33.318 ************************************ 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 01:03:33.318 * Looking for test storage... 01:03:33.318 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 01:03:33.318 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:03:33.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:33.319 --rc genhtml_branch_coverage=1 01:03:33.319 --rc genhtml_function_coverage=1 01:03:33.319 --rc genhtml_legend=1 01:03:33.319 --rc geninfo_all_blocks=1 01:03:33.319 --rc geninfo_unexecuted_blocks=1 01:03:33.319 01:03:33.319 ' 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:03:33.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:33.319 --rc genhtml_branch_coverage=1 01:03:33.319 --rc genhtml_function_coverage=1 01:03:33.319 --rc genhtml_legend=1 01:03:33.319 --rc geninfo_all_blocks=1 01:03:33.319 --rc geninfo_unexecuted_blocks=1 01:03:33.319 01:03:33.319 ' 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:03:33.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:33.319 --rc genhtml_branch_coverage=1 01:03:33.319 --rc genhtml_function_coverage=1 01:03:33.319 --rc genhtml_legend=1 01:03:33.319 --rc geninfo_all_blocks=1 01:03:33.319 --rc geninfo_unexecuted_blocks=1 01:03:33.319 01:03:33.319 ' 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:03:33.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:33.319 --rc genhtml_branch_coverage=1 01:03:33.319 --rc genhtml_function_coverage=1 01:03:33.319 --rc genhtml_legend=1 01:03:33.319 --rc geninfo_all_blocks=1 01:03:33.319 --rc geninfo_unexecuted_blocks=1 01:03:33.319 01:03:33.319 ' 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:03:33.319 
06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:03:33.319 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ no == yes ]] 
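The "line 33: [: : integer expression expected" message from nvmf/common.sh above is harmless: whichever variable that line compares is empty in this environment, so test receives an empty string where it expects an integer, the comparison evaluates false, and build_nvmf_app_args simply continues. A small illustration of the failure mode and a guarded form (illustrative only, not a patch to common.sh):

  v=""
  [ "$v" -eq 1 ] && echo yes                   # prints "[: : integer expression expected"
  [ -n "$v" ] && [ "$v" -eq 1 ] && echo yes    # testing for the empty case first avoids the message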
01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:03:33.319 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@460 -- # nvmf_veth_init 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:03:33.578 Cannot find device "nvmf_init_br" 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # true 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:03:33.578 Cannot find device "nvmf_init_br2" 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # true 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:03:33.578 Cannot find device "nvmf_tgt_br" 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # true 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:03:33.578 Cannot find device "nvmf_tgt_br2" 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # true 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:03:33.578 Cannot find device "nvmf_init_br" 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # true 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:03:33.578 Cannot find device "nvmf_init_br2" 01:03:33.578 06:02:27 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # true 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:03:33.578 Cannot find device "nvmf_tgt_br" 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # true 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:03:33.578 Cannot find device "nvmf_tgt_br2" 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # true 01:03:33.578 06:02:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:03:33.578 Cannot find device "nvmf_br" 01:03:33.578 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # true 01:03:33.578 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:03:33.578 Cannot find device "nvmf_init_if" 01:03:33.578 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # true 01:03:33.578 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:03:33.578 Cannot find device "nvmf_init_if2" 01:03:33.578 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # true 01:03:33.578 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:33.578 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:33.578 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # true 01:03:33.578 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:33.578 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:33.578 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # true 01:03:33.578 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:03:33.578 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:03:33.578 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:03:33.578 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:03:33.578 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:03:33.578 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:03:33.578 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:03:33.578 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:03:33.578 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:03:33.578 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:03:33.578 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:03:33.578 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:03:33.578 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:03:33.578 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:03:33.578 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:03:33.578 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:03:33.578 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:03:33.838 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:03:33.838 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 01:03:33.838 01:03:33.838 --- 10.0.0.3 ping statistics --- 01:03:33.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:33.838 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:03:33.838 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:03:33.838 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 01:03:33.838 01:03:33.838 --- 10.0.0.4 ping statistics --- 01:03:33.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:33.838 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:03:33.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:03:33.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 01:03:33.838 01:03:33.838 --- 10.0.0.1 ping statistics --- 01:03:33.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:33.838 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:03:33.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:03:33.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 01:03:33.838 01:03:33.838 --- 10.0.0.2 ping statistics --- 01:03:33.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:33.838 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@461 -- # return 0 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=85789 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 85789 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 85789 ']' 01:03:33.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 01:03:33.838 06:02:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:33.838 [2024-12-09 06:02:28.381385] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:03:33.838 [2024-12-09 06:02:28.381750] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:03:34.096 [2024-12-09 06:02:28.531418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:03:34.096 [2024-12-09 06:02:28.568607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:03:34.096 [2024-12-09 06:02:28.568952] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:03:34.096 [2024-12-09 06:02:28.569121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:03:34.096 [2024-12-09 06:02:28.569306] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:03:34.096 [2024-12-09 06:02:28.569354] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:03:34.096 [2024-12-09 06:02:28.570344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:03:34.096 [2024-12-09 06:02:28.570525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:03:34.096 [2024-12-09 06:02:28.570525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:03:34.096 [2024-12-09 06:02:28.570437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:35.034 [2024-12-09 06:02:29.525860] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:35.034 Malloc0 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:35.034 [2024-12-09 06:02:29.586231] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:35.034 [ 01:03:35.034 { 01:03:35.034 "allow_any_host": true, 01:03:35.034 "hosts": [], 01:03:35.034 "listen_addresses": [], 01:03:35.034 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 01:03:35.034 "subtype": "Discovery" 01:03:35.034 }, 01:03:35.034 { 01:03:35.034 "allow_any_host": true, 01:03:35.034 "hosts": [], 01:03:35.034 "listen_addresses": [ 01:03:35.034 { 01:03:35.034 "adrfam": "IPv4", 01:03:35.034 "traddr": "10.0.0.3", 01:03:35.034 "trsvcid": "4420", 01:03:35.034 "trtype": "TCP" 01:03:35.034 } 01:03:35.034 ], 01:03:35.034 "max_cntlid": 65519, 01:03:35.034 "max_namespaces": 2, 01:03:35.034 "min_cntlid": 1, 01:03:35.034 "model_number": "SPDK bdev Controller", 01:03:35.034 "namespaces": [ 01:03:35.034 { 01:03:35.034 "bdev_name": "Malloc0", 01:03:35.034 "name": "Malloc0", 01:03:35.034 "nguid": "9B6A8143C43A4D53B0E503F4928CA8E5", 01:03:35.034 "nsid": 1, 01:03:35.034 "uuid": "9b6a8143-c43a-4d53-b0e5-03f4928ca8e5" 01:03:35.034 } 01:03:35.034 ], 01:03:35.034 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:03:35.034 "serial_number": "SPDK00000000000001", 01:03:35.034 "subtype": "NVMe" 01:03:35.034 } 01:03:35.034 ] 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=85843 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 01:03:35.034 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 01:03:35.035 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 01:03:35.035 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 01:03:35.035 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 01:03:35.035 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 01:03:35.294 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 01:03:35.294 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 01:03:35.294 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 01:03:35.294 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 01:03:35.294 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 01:03:35.294 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 01:03:35.294 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 01:03:35.294 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 01:03:35.294 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:35.294 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:35.294 Malloc1 01:03:35.294 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:35.294 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 01:03:35.294 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:35.294 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:35.294 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:35.294 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 01:03:35.294 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:35.294 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:35.553 [ 01:03:35.553 { 01:03:35.553 "allow_any_host": true, 01:03:35.553 "hosts": [], 01:03:35.553 "listen_addresses": [], 01:03:35.553 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 01:03:35.553 Asynchronous Event Request test 01:03:35.553 Attaching to 10.0.0.3 01:03:35.553 Attached to 10.0.0.3 01:03:35.553 Registering asynchronous event callbacks... 01:03:35.553 Starting namespace attribute notice tests for all controllers... 01:03:35.553 10.0.0.3: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 01:03:35.553 aer_cb - Changed Namespace 01:03:35.553 Cleaning up... 
01:03:35.553 "subtype": "Discovery" 01:03:35.553 }, 01:03:35.553 { 01:03:35.553 "allow_any_host": true, 01:03:35.553 "hosts": [], 01:03:35.553 "listen_addresses": [ 01:03:35.553 { 01:03:35.553 "adrfam": "IPv4", 01:03:35.553 "traddr": "10.0.0.3", 01:03:35.553 "trsvcid": "4420", 01:03:35.553 "trtype": "TCP" 01:03:35.553 } 01:03:35.553 ], 01:03:35.553 "max_cntlid": 65519, 01:03:35.553 "max_namespaces": 2, 01:03:35.553 "min_cntlid": 1, 01:03:35.553 "model_number": "SPDK bdev Controller", 01:03:35.553 "namespaces": [ 01:03:35.553 { 01:03:35.553 "bdev_name": "Malloc0", 01:03:35.553 "name": "Malloc0", 01:03:35.553 "nguid": "9B6A8143C43A4D53B0E503F4928CA8E5", 01:03:35.553 "nsid": 1, 01:03:35.553 "uuid": "9b6a8143-c43a-4d53-b0e5-03f4928ca8e5" 01:03:35.553 }, 01:03:35.553 { 01:03:35.553 "bdev_name": "Malloc1", 01:03:35.553 "name": "Malloc1", 01:03:35.553 "nguid": "895E5D7EEC344B078F158596CDC0894C", 01:03:35.553 "nsid": 2, 01:03:35.553 "uuid": "895e5d7e-ec34-4b07-8f15-8596cdc0894c" 01:03:35.553 } 01:03:35.553 ], 01:03:35.553 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:03:35.553 "serial_number": "SPDK00000000000001", 01:03:35.553 "subtype": "NVMe" 01:03:35.553 } 01:03:35.554 ] 01:03:35.554 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:35.554 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 85843 01:03:35.554 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 01:03:35.554 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:35.554 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:35.554 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:35.554 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 01:03:35.554 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:35.554 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:35.554 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:35.554 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:03:35.554 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:35.554 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:35.554 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:35.554 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 01:03:35.554 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 01:03:35.554 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 01:03:35.554 06:02:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 01:03:35.554 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:03:35.554 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 01:03:35.554 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 01:03:35.554 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:03:35.554 rmmod nvme_tcp 01:03:35.554 rmmod nvme_fabrics 01:03:35.554 rmmod nvme_keyring 01:03:35.554 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:03:35.554 
06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 01:03:35.554 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 01:03:35.554 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 85789 ']' 01:03:35.554 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 85789 01:03:35.554 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 85789 ']' 01:03:35.554 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 85789 01:03:35.554 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 01:03:35.554 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:03:35.554 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85789 01:03:35.554 killing process with pid 85789 01:03:35.554 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:03:35.554 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:03:35.554 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85789' 01:03:35.554 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 85789 01:03:35.554 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 85789 01:03:35.817 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:03:35.817 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:03:35.817 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:03:35.817 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 01:03:35.817 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 01:03:35.817 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:03:35.817 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 01:03:35.817 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:03:35.817 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:03:35.817 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:03:35.817 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:03:35.817 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:03:35.817 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:03:35.817 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:03:35.818 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:03:35.818 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:03:35.818 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:03:35.818 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:03:35.818 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:03:36.076 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:03:36.076 
06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:36.076 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:36.076 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@246 -- # remove_spdk_ns 01:03:36.076 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:36.076 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:03:36.076 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:36.076 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@300 -- # return 0 01:03:36.076 01:03:36.076 real 0m2.800s 01:03:36.076 user 0m7.157s 01:03:36.076 sys 0m0.731s 01:03:36.076 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 01:03:36.076 06:02:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 01:03:36.076 ************************************ 01:03:36.076 END TEST nvmf_aer 01:03:36.076 ************************************ 01:03:36.076 06:02:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 01:03:36.076 06:02:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:03:36.076 06:02:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:03:36.076 06:02:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:03:36.076 ************************************ 01:03:36.076 START TEST nvmf_async_init 01:03:36.076 ************************************ 01:03:36.076 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 01:03:36.076 * Looking for test storage... 
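The nvmf_aer run that ends above and the nvmf_async_init run that starts here both go through nvmf_veth_init from test/nvmf/common.sh. A minimal sketch of the topology those logged ip/iptables commands build follows; interface names, addresses, and the SPDK_NVMF comment tag are taken from the log, while the loop structure and the elided comment text are shorthand, and the pre-cleanup "Cannot find device" probes are omitted:

    # create the target network namespace and the four veth pairs
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # move the target ends into the namespace and address everything from 10.0.0.0/24
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # bring the links up, then bridge the host-side ends together on nvmf_br
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done

    # accept NVMe/TCP (port 4420) from the initiator interfaces and allow bridge forwarding;
    # the SPDK_NVMF comment tag is what iptr later uses to strip these rules
    # (iptables-save | grep -v SPDK_NVMF | iptables-restore, as logged in the fini path)
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:...'

    # sanity pings in both directions before nvmf_tgt is started inside the namespace
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2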
01:03:36.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:03:36.076 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:03:36.076 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 01:03:36.076 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:03:36.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:36.335 --rc genhtml_branch_coverage=1 01:03:36.335 --rc genhtml_function_coverage=1 01:03:36.335 --rc genhtml_legend=1 01:03:36.335 --rc geninfo_all_blocks=1 01:03:36.335 --rc geninfo_unexecuted_blocks=1 01:03:36.335 01:03:36.335 ' 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:03:36.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:36.335 --rc genhtml_branch_coverage=1 01:03:36.335 --rc genhtml_function_coverage=1 01:03:36.335 --rc genhtml_legend=1 01:03:36.335 --rc geninfo_all_blocks=1 01:03:36.335 --rc geninfo_unexecuted_blocks=1 01:03:36.335 01:03:36.335 ' 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:03:36.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:36.335 --rc genhtml_branch_coverage=1 01:03:36.335 --rc genhtml_function_coverage=1 01:03:36.335 --rc genhtml_legend=1 01:03:36.335 --rc geninfo_all_blocks=1 01:03:36.335 --rc geninfo_unexecuted_blocks=1 01:03:36.335 01:03:36.335 ' 01:03:36.335 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:03:36.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:36.335 --rc genhtml_branch_coverage=1 01:03:36.335 --rc genhtml_function_coverage=1 01:03:36.335 --rc genhtml_legend=1 01:03:36.336 --rc geninfo_all_blocks=1 01:03:36.336 --rc geninfo_unexecuted_blocks=1 01:03:36.336 01:03:36.336 ' 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:03:36.336 06:02:30 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:03:36.336 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 01:03:36.336 06:02:30 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=06809f1c156745828aa824f17565220e 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@460 -- # nvmf_veth_init 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 
01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:03:36.336 Cannot find device "nvmf_init_br" 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # true 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:03:36.336 Cannot find device "nvmf_init_br2" 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # true 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:03:36.336 Cannot find device "nvmf_tgt_br" 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # true 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:03:36.336 Cannot find device "nvmf_tgt_br2" 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # true 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:03:36.336 Cannot find device "nvmf_init_br" 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # true 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:03:36.336 Cannot find device "nvmf_init_br2" 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # true 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:03:36.336 Cannot find device "nvmf_tgt_br" 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # true 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:03:36.336 Cannot find device "nvmf_tgt_br2" 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # true 01:03:36.336 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:03:36.337 Cannot find device "nvmf_br" 01:03:36.337 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # true 01:03:36.337 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:03:36.337 Cannot find device "nvmf_init_if" 01:03:36.337 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # true 01:03:36.337 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:03:36.337 Cannot find device "nvmf_init_if2" 01:03:36.337 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # true 01:03:36.337 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:36.337 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:36.337 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # true 01:03:36.337 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if2 01:03:36.337 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:36.337 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # true 01:03:36.337 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:03:36.337 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:03:36.337 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:03:36.337 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:03:36.337 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:03:36.595 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:03:36.595 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:03:36.595 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:03:36.595 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:03:36.595 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:03:36.595 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:03:36.595 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:03:36.595 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:03:36.595 06:02:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:03:36.595 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:03:36.596 06:02:31 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:03:36.596 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:03:36.596 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 01:03:36.596 01:03:36.596 --- 10.0.0.3 ping statistics --- 01:03:36.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:36.596 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:03:36.596 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:03:36.596 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.032 ms 01:03:36.596 01:03:36.596 --- 10.0.0.4 ping statistics --- 01:03:36.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:36.596 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:03:36.596 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:03:36.596 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 01:03:36.596 01:03:36.596 --- 10.0.0.1 ping statistics --- 01:03:36.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:36.596 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:03:36.596 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:03:36.596 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 01:03:36.596 01:03:36.596 --- 10.0.0.2 ping statistics --- 01:03:36.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:36.596 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@461 -- # return 0 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=86071 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 86071 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 86071 ']' 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 01:03:36.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 01:03:36.596 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:36.856 [2024-12-09 06:02:31.198778] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:03:36.856 [2024-12-09 06:02:31.199571] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:03:36.856 [2024-12-09 06:02:31.353723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:03:36.856 [2024-12-09 06:02:31.391747] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
01:03:36.856 [2024-12-09 06:02:31.391818] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:03:36.856 [2024-12-09 06:02:31.391832] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:03:36.856 [2024-12-09 06:02:31.391841] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:03:36.856 [2024-12-09 06:02:31.391850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:03:36.856 [2024-12-09 06:02:31.392238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:03:37.115 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:37.115 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 01:03:37.115 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:03:37.115 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 01:03:37.115 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:37.115 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:03:37.115 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 01:03:37.115 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:37.115 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:37.115 [2024-12-09 06:02:31.538489] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:03:37.115 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:37.116 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 01:03:37.116 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:37.116 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:37.116 null0 01:03:37.116 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:37.116 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 01:03:37.116 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:37.116 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:37.116 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:37.116 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 01:03:37.116 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:37.116 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:37.116 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:37.116 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 06809f1c156745828aa824f17565220e 01:03:37.116 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 
-- # xtrace_disable 01:03:37.116 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:37.116 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:37.116 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:03:37.116 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:37.116 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:37.116 [2024-12-09 06:02:31.578676] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:03:37.116 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:37.116 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 01:03:37.116 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:37.116 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:37.374 nvme0n1 01:03:37.374 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:37.374 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 01:03:37.374 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:37.374 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:37.374 [ 01:03:37.374 { 01:03:37.374 "aliases": [ 01:03:37.374 "06809f1c-1567-4582-8aa8-24f17565220e" 01:03:37.374 ], 01:03:37.374 "assigned_rate_limits": { 01:03:37.374 "r_mbytes_per_sec": 0, 01:03:37.374 "rw_ios_per_sec": 0, 01:03:37.374 "rw_mbytes_per_sec": 0, 01:03:37.374 "w_mbytes_per_sec": 0 01:03:37.374 }, 01:03:37.374 "block_size": 512, 01:03:37.374 "claimed": false, 01:03:37.374 "driver_specific": { 01:03:37.374 "mp_policy": "active_passive", 01:03:37.374 "nvme": [ 01:03:37.374 { 01:03:37.374 "ctrlr_data": { 01:03:37.374 "ana_reporting": false, 01:03:37.374 "cntlid": 1, 01:03:37.374 "firmware_revision": "25.01", 01:03:37.374 "model_number": "SPDK bdev Controller", 01:03:37.374 "multi_ctrlr": true, 01:03:37.374 "oacs": { 01:03:37.374 "firmware": 0, 01:03:37.374 "format": 0, 01:03:37.374 "ns_manage": 0, 01:03:37.374 "security": 0 01:03:37.374 }, 01:03:37.374 "serial_number": "00000000000000000000", 01:03:37.374 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:03:37.374 "vendor_id": "0x8086" 01:03:37.374 }, 01:03:37.374 "ns_data": { 01:03:37.374 "can_share": true, 01:03:37.374 "id": 1 01:03:37.374 }, 01:03:37.374 "trid": { 01:03:37.374 "adrfam": "IPv4", 01:03:37.374 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:03:37.374 "traddr": "10.0.0.3", 01:03:37.374 "trsvcid": "4420", 01:03:37.374 "trtype": "TCP" 01:03:37.374 }, 01:03:37.374 "vs": { 01:03:37.374 "nvme_version": "1.3" 01:03:37.374 } 01:03:37.374 } 01:03:37.374 ] 01:03:37.374 }, 01:03:37.374 "memory_domains": [ 01:03:37.374 { 01:03:37.374 "dma_device_id": "system", 01:03:37.374 "dma_device_type": 1 01:03:37.374 } 01:03:37.374 ], 01:03:37.374 "name": "nvme0n1", 01:03:37.374 "num_blocks": 2097152, 01:03:37.374 "numa_id": -1, 01:03:37.374 "product_name": "NVMe disk", 01:03:37.374 "supported_io_types": { 01:03:37.374 "abort": true, 
01:03:37.374 "compare": true, 01:03:37.374 "compare_and_write": true, 01:03:37.374 "copy": true, 01:03:37.374 "flush": true, 01:03:37.374 "get_zone_info": false, 01:03:37.374 "nvme_admin": true, 01:03:37.374 "nvme_io": true, 01:03:37.374 "nvme_io_md": false, 01:03:37.374 "nvme_iov_md": false, 01:03:37.374 "read": true, 01:03:37.374 "reset": true, 01:03:37.374 "seek_data": false, 01:03:37.374 "seek_hole": false, 01:03:37.374 "unmap": false, 01:03:37.374 "write": true, 01:03:37.374 "write_zeroes": true, 01:03:37.374 "zcopy": false, 01:03:37.374 "zone_append": false, 01:03:37.374 "zone_management": false 01:03:37.374 }, 01:03:37.374 "uuid": "06809f1c-1567-4582-8aa8-24f17565220e", 01:03:37.374 "zoned": false 01:03:37.374 } 01:03:37.374 ] 01:03:37.374 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:37.374 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 01:03:37.374 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:37.374 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:37.375 [2024-12-09 06:02:31.847349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:03:37.375 [2024-12-09 06:02:31.847482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d6360 (9): Bad file descriptor 01:03:37.634 [2024-12-09 06:02:31.979902] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 01:03:37.634 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:37.634 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 01:03:37.634 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:37.634 06:02:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:37.634 [ 01:03:37.634 { 01:03:37.634 "aliases": [ 01:03:37.634 "06809f1c-1567-4582-8aa8-24f17565220e" 01:03:37.634 ], 01:03:37.634 "assigned_rate_limits": { 01:03:37.634 "r_mbytes_per_sec": 0, 01:03:37.634 "rw_ios_per_sec": 0, 01:03:37.634 "rw_mbytes_per_sec": 0, 01:03:37.634 "w_mbytes_per_sec": 0 01:03:37.634 }, 01:03:37.634 "block_size": 512, 01:03:37.634 "claimed": false, 01:03:37.634 "driver_specific": { 01:03:37.634 "mp_policy": "active_passive", 01:03:37.634 "nvme": [ 01:03:37.634 { 01:03:37.634 "ctrlr_data": { 01:03:37.634 "ana_reporting": false, 01:03:37.634 "cntlid": 2, 01:03:37.634 "firmware_revision": "25.01", 01:03:37.634 "model_number": "SPDK bdev Controller", 01:03:37.634 "multi_ctrlr": true, 01:03:37.634 "oacs": { 01:03:37.634 "firmware": 0, 01:03:37.634 "format": 0, 01:03:37.634 "ns_manage": 0, 01:03:37.634 "security": 0 01:03:37.634 }, 01:03:37.634 "serial_number": "00000000000000000000", 01:03:37.634 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:03:37.634 "vendor_id": "0x8086" 01:03:37.634 }, 01:03:37.634 "ns_data": { 01:03:37.634 "can_share": true, 01:03:37.634 "id": 1 01:03:37.634 }, 01:03:37.634 "trid": { 01:03:37.634 "adrfam": "IPv4", 01:03:37.634 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:03:37.634 "traddr": "10.0.0.3", 01:03:37.634 "trsvcid": "4420", 01:03:37.634 "trtype": "TCP" 01:03:37.634 }, 01:03:37.634 "vs": { 01:03:37.634 "nvme_version": "1.3" 01:03:37.634 } 01:03:37.634 } 01:03:37.634 ] 
01:03:37.634 }, 01:03:37.634 "memory_domains": [ 01:03:37.634 { 01:03:37.634 "dma_device_id": "system", 01:03:37.634 "dma_device_type": 1 01:03:37.634 } 01:03:37.634 ], 01:03:37.634 "name": "nvme0n1", 01:03:37.634 "num_blocks": 2097152, 01:03:37.634 "numa_id": -1, 01:03:37.634 "product_name": "NVMe disk", 01:03:37.634 "supported_io_types": { 01:03:37.634 "abort": true, 01:03:37.634 "compare": true, 01:03:37.634 "compare_and_write": true, 01:03:37.634 "copy": true, 01:03:37.634 "flush": true, 01:03:37.634 "get_zone_info": false, 01:03:37.634 "nvme_admin": true, 01:03:37.634 "nvme_io": true, 01:03:37.634 "nvme_io_md": false, 01:03:37.634 "nvme_iov_md": false, 01:03:37.634 "read": true, 01:03:37.634 "reset": true, 01:03:37.634 "seek_data": false, 01:03:37.634 "seek_hole": false, 01:03:37.634 "unmap": false, 01:03:37.634 "write": true, 01:03:37.634 "write_zeroes": true, 01:03:37.634 "zcopy": false, 01:03:37.634 "zone_append": false, 01:03:37.634 "zone_management": false 01:03:37.634 }, 01:03:37.634 "uuid": "06809f1c-1567-4582-8aa8-24f17565220e", 01:03:37.634 "zoned": false 01:03:37.634 } 01:03:37.634 ] 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.jvNLqlkr4e 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.jvNLqlkr4e 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.jvNLqlkr4e 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 --secure-channel 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:37.634 [2024-12-09 06:02:32.063562] 
tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:03:37.634 [2024-12-09 06:02:32.063762] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:37.634 [2024-12-09 06:02:32.079571] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:03:37.634 nvme0n1 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:37.634 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:37.634 [ 01:03:37.634 { 01:03:37.634 "aliases": [ 01:03:37.634 "06809f1c-1567-4582-8aa8-24f17565220e" 01:03:37.634 ], 01:03:37.634 "assigned_rate_limits": { 01:03:37.634 "r_mbytes_per_sec": 0, 01:03:37.634 "rw_ios_per_sec": 0, 01:03:37.634 "rw_mbytes_per_sec": 0, 01:03:37.634 "w_mbytes_per_sec": 0 01:03:37.634 }, 01:03:37.634 "block_size": 512, 01:03:37.634 "claimed": false, 01:03:37.634 "driver_specific": { 01:03:37.634 "mp_policy": "active_passive", 01:03:37.634 "nvme": [ 01:03:37.634 { 01:03:37.634 "ctrlr_data": { 01:03:37.634 "ana_reporting": false, 01:03:37.634 "cntlid": 3, 01:03:37.634 "firmware_revision": "25.01", 01:03:37.634 "model_number": "SPDK bdev Controller", 01:03:37.634 "multi_ctrlr": true, 01:03:37.634 "oacs": { 01:03:37.634 "firmware": 0, 01:03:37.634 "format": 0, 01:03:37.635 "ns_manage": 0, 01:03:37.635 "security": 0 01:03:37.635 }, 01:03:37.635 "serial_number": "00000000000000000000", 01:03:37.635 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:03:37.635 "vendor_id": "0x8086" 01:03:37.635 }, 01:03:37.635 "ns_data": { 01:03:37.635 "can_share": true, 01:03:37.635 "id": 1 01:03:37.635 }, 01:03:37.635 "trid": { 01:03:37.635 "adrfam": "IPv4", 01:03:37.635 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:03:37.635 "traddr": "10.0.0.3", 01:03:37.635 "trsvcid": "4421", 01:03:37.635 "trtype": "TCP" 01:03:37.635 }, 01:03:37.635 "vs": { 01:03:37.635 "nvme_version": "1.3" 01:03:37.635 } 01:03:37.635 } 01:03:37.635 ] 01:03:37.635 }, 01:03:37.635 "memory_domains": [ 01:03:37.635 { 01:03:37.635 "dma_device_id": "system", 01:03:37.635 "dma_device_type": 1 01:03:37.635 } 01:03:37.635 ], 01:03:37.635 "name": "nvme0n1", 01:03:37.635 "num_blocks": 
2097152, 01:03:37.635 "numa_id": -1, 01:03:37.635 "product_name": "NVMe disk", 01:03:37.635 "supported_io_types": { 01:03:37.635 "abort": true, 01:03:37.635 "compare": true, 01:03:37.635 "compare_and_write": true, 01:03:37.635 "copy": true, 01:03:37.635 "flush": true, 01:03:37.635 "get_zone_info": false, 01:03:37.635 "nvme_admin": true, 01:03:37.635 "nvme_io": true, 01:03:37.635 "nvme_io_md": false, 01:03:37.635 "nvme_iov_md": false, 01:03:37.635 "read": true, 01:03:37.635 "reset": true, 01:03:37.635 "seek_data": false, 01:03:37.635 "seek_hole": false, 01:03:37.635 "unmap": false, 01:03:37.635 "write": true, 01:03:37.635 "write_zeroes": true, 01:03:37.635 "zcopy": false, 01:03:37.635 "zone_append": false, 01:03:37.635 "zone_management": false 01:03:37.635 }, 01:03:37.635 "uuid": "06809f1c-1567-4582-8aa8-24f17565220e", 01:03:37.635 "zoned": false 01:03:37.635 } 01:03:37.635 ] 01:03:37.635 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:37.635 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:03:37.635 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:37.635 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:37.635 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:37.635 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.jvNLqlkr4e 01:03:37.635 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 01:03:37.635 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 01:03:37.635 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 01:03:37.635 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 01:03:37.894 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:03:37.894 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 01:03:37.894 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 01:03:37.894 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:03:37.894 rmmod nvme_tcp 01:03:37.894 rmmod nvme_fabrics 01:03:37.894 rmmod nvme_keyring 01:03:37.894 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:03:37.894 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 01:03:37.894 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 01:03:37.894 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 86071 ']' 01:03:37.894 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 86071 01:03:37.894 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 86071 ']' 01:03:37.894 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 86071 01:03:37.894 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 01:03:37.894 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:03:37.894 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86071 01:03:37.894 06:02:32 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:03:37.894 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:03:37.894 killing process with pid 86071 01:03:37.894 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86071' 01:03:37.894 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 86071 01:03:37.894 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 86071 01:03:37.894 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:03:37.894 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:03:37.894 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:03:38.153 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 01:03:38.154 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 01:03:38.154 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:03:38.154 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 01:03:38.154 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:03:38.154 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:03:38.154 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:03:38.154 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:03:38.154 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:03:38.154 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:03:38.154 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:03:38.154 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:03:38.154 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:03:38.154 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:03:38.154 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:03:38.154 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:03:38.154 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:03:38.154 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:38.154 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:38.154 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@246 -- # remove_spdk_ns 01:03:38.154 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:38.154 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:03:38.154 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 01:03:38.154 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@300 -- # return 0 01:03:38.154 01:03:38.154 real 0m2.179s 01:03:38.154 user 0m1.692s 01:03:38.154 sys 0m0.644s 01:03:38.154 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 01:03:38.154 ************************************ 01:03:38.154 END TEST nvmf_async_init 01:03:38.154 ************************************ 01:03:38.154 06:02:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:03:38.413 ************************************ 01:03:38.413 START TEST dma 01:03:38.413 ************************************ 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 01:03:38.413 * Looking for test storage... 01:03:38.413 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:03:38.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:38.413 --rc genhtml_branch_coverage=1 01:03:38.413 --rc genhtml_function_coverage=1 01:03:38.413 --rc genhtml_legend=1 01:03:38.413 --rc geninfo_all_blocks=1 01:03:38.413 --rc geninfo_unexecuted_blocks=1 01:03:38.413 01:03:38.413 ' 01:03:38.413 06:02:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:03:38.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:38.414 --rc genhtml_branch_coverage=1 01:03:38.414 --rc genhtml_function_coverage=1 01:03:38.414 --rc genhtml_legend=1 01:03:38.414 --rc geninfo_all_blocks=1 01:03:38.414 --rc geninfo_unexecuted_blocks=1 01:03:38.414 01:03:38.414 ' 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:03:38.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:38.414 --rc genhtml_branch_coverage=1 01:03:38.414 --rc genhtml_function_coverage=1 01:03:38.414 --rc genhtml_legend=1 01:03:38.414 --rc geninfo_all_blocks=1 01:03:38.414 --rc geninfo_unexecuted_blocks=1 01:03:38.414 01:03:38.414 ' 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:03:38.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:38.414 --rc genhtml_branch_coverage=1 01:03:38.414 --rc genhtml_function_coverage=1 01:03:38.414 --rc genhtml_legend=1 01:03:38.414 --rc geninfo_all_blocks=1 01:03:38.414 --rc geninfo_unexecuted_blocks=1 01:03:38.414 01:03:38.414 ' 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:03:38.414 06:02:32 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:03:38.414 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:03:38.414 06:02:32 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 01:03:38.674 06:02:32 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 01:03:38.674 06:02:32 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 01:03:38.674 01:03:38.674 real 0m0.219s 01:03:38.674 user 0m0.130s 01:03:38.674 sys 0m0.102s 01:03:38.674 06:02:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 01:03:38.674 06:02:32 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 01:03:38.674 ************************************ 01:03:38.674 END TEST dma 01:03:38.674 ************************************ 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:03:38.674 ************************************ 01:03:38.674 START TEST nvmf_identify 01:03:38.674 ************************************ 01:03:38.674 06:02:33 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 01:03:38.674 * Looking for test storage... 01:03:38.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:03:38.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:38.674 --rc genhtml_branch_coverage=1 01:03:38.674 --rc genhtml_function_coverage=1 01:03:38.674 --rc genhtml_legend=1 01:03:38.674 --rc geninfo_all_blocks=1 01:03:38.674 --rc geninfo_unexecuted_blocks=1 01:03:38.674 01:03:38.674 ' 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:03:38.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:38.674 --rc genhtml_branch_coverage=1 01:03:38.674 --rc genhtml_function_coverage=1 01:03:38.674 --rc genhtml_legend=1 01:03:38.674 --rc geninfo_all_blocks=1 01:03:38.674 --rc geninfo_unexecuted_blocks=1 01:03:38.674 01:03:38.674 ' 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:03:38.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:38.674 --rc genhtml_branch_coverage=1 01:03:38.674 --rc genhtml_function_coverage=1 01:03:38.674 --rc genhtml_legend=1 01:03:38.674 --rc geninfo_all_blocks=1 01:03:38.674 --rc geninfo_unexecuted_blocks=1 01:03:38.674 01:03:38.674 ' 01:03:38.674 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:03:38.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:38.675 --rc genhtml_branch_coverage=1 01:03:38.675 --rc genhtml_function_coverage=1 01:03:38.675 --rc genhtml_legend=1 01:03:38.675 --rc geninfo_all_blocks=1 01:03:38.675 --rc geninfo_unexecuted_blocks=1 01:03:38.675 01:03:38.675 ' 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:38.675 
06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:03:38.675 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:38.675 06:02:33 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:03:38.675 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:03:38.936 Cannot find device "nvmf_init_br" 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:03:38.936 Cannot find device "nvmf_init_br2" 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:03:38.936 Cannot find device "nvmf_tgt_br" 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
01:03:38.936 Cannot find device "nvmf_tgt_br2" 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:03:38.936 Cannot find device "nvmf_init_br" 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:03:38.936 Cannot find device "nvmf_init_br2" 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:03:38.936 Cannot find device "nvmf_tgt_br" 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:03:38.936 Cannot find device "nvmf_tgt_br2" 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:03:38.936 Cannot find device "nvmf_br" 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:03:38.936 Cannot find device "nvmf_init_if" 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:03:38.936 Cannot find device "nvmf_init_if2" 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:38.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:38.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:03:38.936 
06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:03:38.936 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:03:39.227 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:03:39.227 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 01:03:39.227 01:03:39.227 --- 10.0.0.3 ping statistics --- 01:03:39.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:39.227 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:03:39.227 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:03:39.227 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 01:03:39.227 01:03:39.227 --- 10.0.0.4 ping statistics --- 01:03:39.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:39.227 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:03:39.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:03:39.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 01:03:39.227 01:03:39.227 --- 10.0.0.1 ping statistics --- 01:03:39.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:39.227 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:03:39.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:03:39.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 01:03:39.227 01:03:39.227 --- 10.0.0.2 ping statistics --- 01:03:39.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:39.227 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=86389 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 86389 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 86389 ']' 01:03:39.227 
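The nvmf/common.sh trace above first tears down any stale interfaces (the "Cannot find device" messages are the expected result of deleting devices that do not exist yet) and then builds the test network from scratch. Condensed into plain iproute2/iptables commands, the topology is roughly the sketch below: two veth pairs for the initiator side, two for the target side moved into a private network namespace, and all four bridge ends enslaved to one bridge. This is only an illustrative summary of the steps already shown, not a substitute for the script; names and addresses are the ones the harness uses.

# Sketch of the network the harness builds (condensed from the trace above).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
    ip link set "$dev" master nvmf_br
done
# Let NVMe/TCP traffic (port 4420) in from the initiator veths and let the bridge forward.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings in the trace (10.0.0.3 and 10.0.0.4 from the host, 10.0.0.1 and 10.0.0.2 from inside the namespace) confirm that the bridge forwards in both directions before the target is started.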
06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 01:03:39.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 01:03:39.227 06:02:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:03:39.227 [2024-12-09 06:02:33.733484] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:03:39.227 [2024-12-09 06:02:33.733578] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:03:39.485 [2024-12-09 06:02:33.888339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:03:39.485 [2024-12-09 06:02:33.929231] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:03:39.485 [2024-12-09 06:02:33.929297] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:03:39.485 [2024-12-09 06:02:33.929320] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:03:39.485 [2024-12-09 06:02:33.929330] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:03:39.485 [2024-12-09 06:02:33.929339] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
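The nvmf_tgt launch and the waitforlisten step traced above have a simple hand-rolled equivalent, sketched below; the until-loop is only a crude stand-in for the harness's waitforlisten helper, using rpc_get_methods as a liveness probe against the default /var/tmp/spdk.sock RPC socket.

# Start the NVMe-oF target inside the test namespace; flags copied from the trace above
# (-m 0xF = cores 0-3, -e 0xFFFF = tracepoint group mask, per the app notices).
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Roughly what waitforlisten does: poll until the app answers JSON-RPC requests.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done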
01:03:39.485 [2024-12-09 06:02:33.930246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:03:39.485 [2024-12-09 06:02:33.930758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:03:39.485 [2024-12-09 06:02:33.930850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:03:39.485 [2024-12-09 06:02:33.930857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:03:39.485 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:39.485 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 01:03:39.485 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:03:39.485 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:39.485 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:03:39.485 [2024-12-09 06:02:34.030210] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:03:39.485 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:39.485 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 01:03:39.485 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 01:03:39.485 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:03:39.745 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:03:39.745 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:39.745 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:03:39.745 Malloc0 01:03:39.745 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:39.745 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:03:39.745 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:39.745 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:03:39.745 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:39.745 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 01:03:39.745 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:39.745 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:03:39.745 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:39.745 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:03:39.745 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:39.745 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:03:39.745 [2024-12-09 06:02:34.125423] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:03:39.745 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:39.745 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:03:39.745 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:39.745 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:03:39.745 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:39.745 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 01:03:39.745 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:39.745 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:03:39.745 [ 01:03:39.745 { 01:03:39.745 "allow_any_host": true, 01:03:39.745 "hosts": [], 01:03:39.745 "listen_addresses": [ 01:03:39.745 { 01:03:39.745 "adrfam": "IPv4", 01:03:39.745 "traddr": "10.0.0.3", 01:03:39.745 "trsvcid": "4420", 01:03:39.745 "trtype": "TCP" 01:03:39.745 } 01:03:39.745 ], 01:03:39.745 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 01:03:39.745 "subtype": "Discovery" 01:03:39.745 }, 01:03:39.745 { 01:03:39.745 "allow_any_host": true, 01:03:39.745 "hosts": [], 01:03:39.745 "listen_addresses": [ 01:03:39.745 { 01:03:39.745 "adrfam": "IPv4", 01:03:39.745 "traddr": "10.0.0.3", 01:03:39.745 "trsvcid": "4420", 01:03:39.745 "trtype": "TCP" 01:03:39.745 } 01:03:39.745 ], 01:03:39.745 "max_cntlid": 65519, 01:03:39.745 "max_namespaces": 32, 01:03:39.745 "min_cntlid": 1, 01:03:39.745 "model_number": "SPDK bdev Controller", 01:03:39.745 "namespaces": [ 01:03:39.745 { 01:03:39.745 "bdev_name": "Malloc0", 01:03:39.745 "eui64": "ABCDEF0123456789", 01:03:39.745 "name": "Malloc0", 01:03:39.745 "nguid": "ABCDEF0123456789ABCDEF0123456789", 01:03:39.745 "nsid": 1, 01:03:39.745 "uuid": "8efac89d-951f-46e5-9f33-ab22b6f04a01" 01:03:39.745 } 01:03:39.745 ], 01:03:39.745 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:03:39.745 "serial_number": "SPDK00000000000001", 01:03:39.745 "subtype": "NVMe" 01:03:39.745 } 01:03:39.745 ] 01:03:39.745 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:39.745 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 01:03:39.745 [2024-12-09 06:02:34.181684] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
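The rpc_cmd calls traced above (rpc_cmd is the autotest wrapper around scripts/rpc.py) build the target configuration that the nvmf_get_subsystems JSON dump then confirms. Expressed directly as rpc.py invocations, the same configuration would be roughly the following sketch:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192           # TCP transport, flags copied from the trace
$RPC bdev_malloc_create 64 512 -b Malloc0              # 64 MB RAM-backed bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_get_subsystems                               # returns the JSON shown above

The JSON confirms that the discovery subsystem and nqn.2016-06.io.spdk:cnode1 (with Malloc0 as namespace 1) are both listening on 10.0.0.3:4420, which is where spdk_nvme_identify is then pointed with the discovery NQN.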
01:03:39.745 [2024-12-09 06:02:34.181754] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86429 ] 01:03:40.007 [2024-12-09 06:02:34.343853] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 01:03:40.007 [2024-12-09 06:02:34.343932] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 01:03:40.007 [2024-12-09 06:02:34.343940] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 01:03:40.007 [2024-12-09 06:02:34.343956] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 01:03:40.007 [2024-12-09 06:02:34.343968] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 01:03:40.007 [2024-12-09 06:02:34.344315] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 01:03:40.007 [2024-12-09 06:02:34.344380] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x19efd90 0 01:03:40.007 [2024-12-09 06:02:34.351725] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 01:03:40.007 [2024-12-09 06:02:34.351750] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 01:03:40.007 [2024-12-09 06:02:34.351773] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 01:03:40.007 [2024-12-09 06:02:34.351793] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 01:03:40.007 [2024-12-09 06:02:34.351829] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.007 [2024-12-09 06:02:34.351837] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.007 [2024-12-09 06:02:34.351843] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19efd90) 01:03:40.008 [2024-12-09 06:02:34.351858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 01:03:40.008 [2024-12-09 06:02:34.351893] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30600, cid 0, qid 0 01:03:40.008 [2024-12-09 06:02:34.359707] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.008 [2024-12-09 06:02:34.359728] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.008 [2024-12-09 06:02:34.359750] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.008 [2024-12-09 06:02:34.359756] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30600) on tqpair=0x19efd90 01:03:40.008 [2024-12-09 06:02:34.359787] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 01:03:40.008 [2024-12-09 06:02:34.359796] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 01:03:40.008 [2024-12-09 06:02:34.359804] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 01:03:40.008 [2024-12-09 06:02:34.359823] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.008 [2024-12-09 06:02:34.359829] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
01:03:40.008 [2024-12-09 06:02:34.359835] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19efd90) 01:03:40.008 [2024-12-09 06:02:34.359846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.008 [2024-12-09 06:02:34.359877] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30600, cid 0, qid 0 01:03:40.008 [2024-12-09 06:02:34.359976] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.008 [2024-12-09 06:02:34.359985] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.008 [2024-12-09 06:02:34.359989] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.008 [2024-12-09 06:02:34.359994] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30600) on tqpair=0x19efd90 01:03:40.008 [2024-12-09 06:02:34.360011] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 01:03:40.008 [2024-12-09 06:02:34.360020] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 01:03:40.008 [2024-12-09 06:02:34.360029] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.008 [2024-12-09 06:02:34.360034] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.008 [2024-12-09 06:02:34.360038] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19efd90) 01:03:40.008 [2024-12-09 06:02:34.360047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.008 [2024-12-09 06:02:34.360069] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30600, cid 0, qid 0 01:03:40.008 [2024-12-09 06:02:34.360145] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.008 [2024-12-09 06:02:34.360153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.008 [2024-12-09 06:02:34.360157] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.008 [2024-12-09 06:02:34.360162] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30600) on tqpair=0x19efd90 01:03:40.008 [2024-12-09 06:02:34.360168] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 01:03:40.008 [2024-12-09 06:02:34.360192] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 01:03:40.008 [2024-12-09 06:02:34.360216] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.008 [2024-12-09 06:02:34.360220] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.008 [2024-12-09 06:02:34.360224] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19efd90) 01:03:40.008 [2024-12-09 06:02:34.360232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.008 [2024-12-09 06:02:34.360251] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30600, cid 0, qid 0 01:03:40.008 [2024-12-09 06:02:34.360317] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.008 [2024-12-09 06:02:34.360324] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.008 [2024-12-09 06:02:34.360328] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.008 [2024-12-09 06:02:34.360332] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30600) on tqpair=0x19efd90 01:03:40.008 [2024-12-09 06:02:34.360338] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 01:03:40.008 [2024-12-09 06:02:34.360349] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.008 [2024-12-09 06:02:34.360354] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.008 [2024-12-09 06:02:34.360358] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19efd90) 01:03:40.008 [2024-12-09 06:02:34.360365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.008 [2024-12-09 06:02:34.360383] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30600, cid 0, qid 0 01:03:40.008 [2024-12-09 06:02:34.360446] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.008 [2024-12-09 06:02:34.360454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.008 [2024-12-09 06:02:34.360457] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.008 [2024-12-09 06:02:34.360462] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30600) on tqpair=0x19efd90 01:03:40.008 [2024-12-09 06:02:34.360467] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 01:03:40.008 [2024-12-09 06:02:34.360473] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 01:03:40.008 [2024-12-09 06:02:34.360481] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 01:03:40.008 [2024-12-09 06:02:34.360592] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 01:03:40.008 [2024-12-09 06:02:34.360598] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 01:03:40.008 [2024-12-09 06:02:34.360608] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.008 [2024-12-09 06:02:34.360612] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.008 [2024-12-09 06:02:34.360616] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19efd90) 01:03:40.008 [2024-12-09 06:02:34.360624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.008 [2024-12-09 06:02:34.360644] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30600, cid 0, qid 0 01:03:40.008 [2024-12-09 06:02:34.360744] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.008 [2024-12-09 06:02:34.360769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.008 [2024-12-09 06:02:34.360774] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 01:03:40.008 [2024-12-09 06:02:34.360778] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30600) on tqpair=0x19efd90 01:03:40.008 [2024-12-09 06:02:34.360785] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 01:03:40.008 [2024-12-09 06:02:34.360796] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.008 [2024-12-09 06:02:34.360801] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.008 [2024-12-09 06:02:34.360806] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19efd90) 01:03:40.008 [2024-12-09 06:02:34.360814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.008 [2024-12-09 06:02:34.360836] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30600, cid 0, qid 0 01:03:40.008 [2024-12-09 06:02:34.360911] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.008 [2024-12-09 06:02:34.360919] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.008 [2024-12-09 06:02:34.360923] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.008 [2024-12-09 06:02:34.360928] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30600) on tqpair=0x19efd90 01:03:40.008 [2024-12-09 06:02:34.360933] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 01:03:40.008 [2024-12-09 06:02:34.360939] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 01:03:40.008 [2024-12-09 06:02:34.360948] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 01:03:40.008 [2024-12-09 06:02:34.360959] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 01:03:40.008 [2024-12-09 06:02:34.360971] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.008 [2024-12-09 06:02:34.360977] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19efd90) 01:03:40.008 [2024-12-09 06:02:34.360985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.008 [2024-12-09 06:02:34.361006] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30600, cid 0, qid 0 01:03:40.008 [2024-12-09 06:02:34.361155] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:03:40.008 [2024-12-09 06:02:34.361162] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:03:40.008 [2024-12-09 06:02:34.361166] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:03:40.008 [2024-12-09 06:02:34.361171] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19efd90): datao=0, datal=4096, cccid=0 01:03:40.008 [2024-12-09 06:02:34.361176] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a30600) on tqpair(0x19efd90): expected_datao=0, payload_size=4096 01:03:40.008 [2024-12-09 06:02:34.361181] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 01:03:40.008 [2024-12-09 06:02:34.361189] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:03:40.008 [2024-12-09 06:02:34.361194] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:03:40.008 [2024-12-09 06:02:34.361203] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.008 [2024-12-09 06:02:34.361209] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.008 [2024-12-09 06:02:34.361213] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.008 [2024-12-09 06:02:34.361218] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30600) on tqpair=0x19efd90 01:03:40.008 [2024-12-09 06:02:34.361227] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 01:03:40.008 [2024-12-09 06:02:34.361233] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 01:03:40.009 [2024-12-09 06:02:34.361238] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 01:03:40.009 [2024-12-09 06:02:34.361244] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 01:03:40.009 [2024-12-09 06:02:34.361249] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 01:03:40.009 [2024-12-09 06:02:34.361254] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 01:03:40.009 [2024-12-09 06:02:34.361263] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 01:03:40.009 [2024-12-09 06:02:34.361271] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.009 [2024-12-09 06:02:34.361276] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.009 [2024-12-09 06:02:34.361280] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19efd90) 01:03:40.009 [2024-12-09 06:02:34.361288] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 01:03:40.009 [2024-12-09 06:02:34.361308] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30600, cid 0, qid 0 01:03:40.009 [2024-12-09 06:02:34.361383] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.009 [2024-12-09 06:02:34.361390] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.009 [2024-12-09 06:02:34.361394] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.009 [2024-12-09 06:02:34.361398] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30600) on tqpair=0x19efd90 01:03:40.009 [2024-12-09 06:02:34.361413] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.009 [2024-12-09 06:02:34.361419] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.009 [2024-12-09 06:02:34.361423] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19efd90) 01:03:40.009 [2024-12-09 06:02:34.361430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:03:40.009 
[2024-12-09 06:02:34.361437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.009 [2024-12-09 06:02:34.361442] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.009 [2024-12-09 06:02:34.361446] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x19efd90) 01:03:40.009 [2024-12-09 06:02:34.361452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:03:40.009 [2024-12-09 06:02:34.361458] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.009 [2024-12-09 06:02:34.361463] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.009 [2024-12-09 06:02:34.361467] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x19efd90) 01:03:40.009 [2024-12-09 06:02:34.361473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:03:40.009 [2024-12-09 06:02:34.361479] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.009 [2024-12-09 06:02:34.361484] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.009 [2024-12-09 06:02:34.361487] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.009 [2024-12-09 06:02:34.361494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:03:40.009 [2024-12-09 06:02:34.361499] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 01:03:40.009 [2024-12-09 06:02:34.361509] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 01:03:40.009 [2024-12-09 06:02:34.361516] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.009 [2024-12-09 06:02:34.361520] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19efd90) 01:03:40.009 [2024-12-09 06:02:34.361528] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.009 [2024-12-09 06:02:34.361550] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30600, cid 0, qid 0 01:03:40.009 [2024-12-09 06:02:34.361557] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30780, cid 1, qid 0 01:03:40.009 [2024-12-09 06:02:34.361563] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30900, cid 2, qid 0 01:03:40.009 [2024-12-09 06:02:34.361568] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.009 [2024-12-09 06:02:34.361573] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30c00, cid 4, qid 0 01:03:40.009 [2024-12-09 06:02:34.361701] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.009 [2024-12-09 06:02:34.361722] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.009 [2024-12-09 06:02:34.361727] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.009 [2024-12-09 06:02:34.361731] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30c00) on tqpair=0x19efd90 01:03:40.009 [2024-12-09 
06:02:34.361738] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 01:03:40.009 [2024-12-09 06:02:34.361764] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 01:03:40.009 [2024-12-09 06:02:34.361779] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.009 [2024-12-09 06:02:34.361785] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19efd90) 01:03:40.009 [2024-12-09 06:02:34.361793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.009 [2024-12-09 06:02:34.361817] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30c00, cid 4, qid 0 01:03:40.009 [2024-12-09 06:02:34.361908] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:03:40.009 [2024-12-09 06:02:34.361916] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:03:40.009 [2024-12-09 06:02:34.361920] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:03:40.009 [2024-12-09 06:02:34.361925] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19efd90): datao=0, datal=4096, cccid=4 01:03:40.009 [2024-12-09 06:02:34.361930] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a30c00) on tqpair(0x19efd90): expected_datao=0, payload_size=4096 01:03:40.009 [2024-12-09 06:02:34.361935] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.009 [2024-12-09 06:02:34.361943] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:03:40.009 [2024-12-09 06:02:34.361948] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:03:40.009 [2024-12-09 06:02:34.361957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.009 [2024-12-09 06:02:34.361964] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.009 [2024-12-09 06:02:34.361968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.009 [2024-12-09 06:02:34.361972] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30c00) on tqpair=0x19efd90 01:03:40.009 [2024-12-09 06:02:34.361987] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 01:03:40.009 [2024-12-09 06:02:34.362017] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.009 [2024-12-09 06:02:34.362024] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19efd90) 01:03:40.009 [2024-12-09 06:02:34.362032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.009 [2024-12-09 06:02:34.362041] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.009 [2024-12-09 06:02:34.362045] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.009 [2024-12-09 06:02:34.362050] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19efd90) 01:03:40.009 [2024-12-09 06:02:34.362056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 01:03:40.009 [2024-12-09 06:02:34.362084] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30c00, cid 4, qid 0 01:03:40.009 [2024-12-09 06:02:34.362093] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30d80, cid 5, qid 0 01:03:40.009 [2024-12-09 06:02:34.362260] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:03:40.009 [2024-12-09 06:02:34.362268] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:03:40.009 [2024-12-09 06:02:34.362271] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:03:40.009 [2024-12-09 06:02:34.362275] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19efd90): datao=0, datal=1024, cccid=4 01:03:40.009 [2024-12-09 06:02:34.362280] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a30c00) on tqpair(0x19efd90): expected_datao=0, payload_size=1024 01:03:40.009 [2024-12-09 06:02:34.362285] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.009 [2024-12-09 06:02:34.362292] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:03:40.009 [2024-12-09 06:02:34.362296] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:03:40.009 [2024-12-09 06:02:34.362302] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.009 [2024-12-09 06:02:34.362308] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.009 [2024-12-09 06:02:34.362312] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.009 [2024-12-09 06:02:34.362316] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30d80) on tqpair=0x19efd90 01:03:40.009 [2024-12-09 06:02:34.402749] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.009 [2024-12-09 06:02:34.402770] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.009 [2024-12-09 06:02:34.402776] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.009 [2024-12-09 06:02:34.402781] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30c00) on tqpair=0x19efd90 01:03:40.009 [2024-12-09 06:02:34.402799] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.009 [2024-12-09 06:02:34.402805] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19efd90) 01:03:40.009 [2024-12-09 06:02:34.402815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.009 [2024-12-09 06:02:34.402850] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30c00, cid 4, qid 0 01:03:40.009 [2024-12-09 06:02:34.402951] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:03:40.009 [2024-12-09 06:02:34.402958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:03:40.009 [2024-12-09 06:02:34.402963] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:03:40.009 [2024-12-09 06:02:34.402981] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19efd90): datao=0, datal=3072, cccid=4 01:03:40.009 [2024-12-09 06:02:34.402987] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a30c00) on tqpair(0x19efd90): expected_datao=0, payload_size=3072 01:03:40.009 [2024-12-09 06:02:34.402992] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.009 [2024-12-09 06:02:34.403017] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
01:03:40.009 [2024-12-09 06:02:34.403021] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:03:40.010 [2024-12-09 06:02:34.403030] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.010 [2024-12-09 06:02:34.403052] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.010 [2024-12-09 06:02:34.403056] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.010 [2024-12-09 06:02:34.403060] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30c00) on tqpair=0x19efd90 01:03:40.010 [2024-12-09 06:02:34.403086] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.010 [2024-12-09 06:02:34.403092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19efd90) 01:03:40.010 [2024-12-09 06:02:34.403099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.010 [2024-12-09 06:02:34.403126] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30c00, cid 4, qid 0 01:03:40.010 [2024-12-09 06:02:34.403199] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:03:40.010 [2024-12-09 06:02:34.403206] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:03:40.010 [2024-12-09 06:02:34.403209] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:03:40.010 [2024-12-09 06:02:34.403213] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19efd90): datao=0, datal=8, cccid=4 01:03:40.010 [2024-12-09 06:02:34.403218] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a30c00) on tqpair(0x19efd90): expected_datao=0, payload_size=8 01:03:40.010 [2024-12-09 06:02:34.403223] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.010 [2024-12-09 06:02:34.403230] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:03:40.010 [2024-12-09 06:02:34.403234] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:03:40.010 ===================================================== 01:03:40.010 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 01:03:40.010 ===================================================== 01:03:40.010 Controller Capabilities/Features 01:03:40.010 ================================ 01:03:40.010 Vendor ID: 0000 01:03:40.010 Subsystem Vendor ID: 0000 01:03:40.010 Serial Number: .................... 01:03:40.010 Model Number: ........................................ 
01:03:40.010 Firmware Version: 25.01 01:03:40.010 Recommended Arb Burst: 0 01:03:40.010 IEEE OUI Identifier: 00 00 00 01:03:40.010 Multi-path I/O 01:03:40.010 May have multiple subsystem ports: No 01:03:40.010 May have multiple controllers: No 01:03:40.010 Associated with SR-IOV VF: No 01:03:40.010 Max Data Transfer Size: 131072 01:03:40.010 Max Number of Namespaces: 0 01:03:40.010 Max Number of I/O Queues: 1024 01:03:40.010 NVMe Specification Version (VS): 1.3 01:03:40.010 NVMe Specification Version (Identify): 1.3 01:03:40.010 Maximum Queue Entries: 128 01:03:40.010 Contiguous Queues Required: Yes 01:03:40.010 Arbitration Mechanisms Supported 01:03:40.010 Weighted Round Robin: Not Supported 01:03:40.010 Vendor Specific: Not Supported 01:03:40.010 Reset Timeout: 15000 ms 01:03:40.010 Doorbell Stride: 4 bytes 01:03:40.010 NVM Subsystem Reset: Not Supported 01:03:40.010 Command Sets Supported 01:03:40.010 NVM Command Set: Supported 01:03:40.010 Boot Partition: Not Supported 01:03:40.010 Memory Page Size Minimum: 4096 bytes 01:03:40.010 Memory Page Size Maximum: 4096 bytes 01:03:40.010 Persistent Memory Region: Not Supported 01:03:40.010 Optional Asynchronous Events Supported 01:03:40.010 Namespace Attribute Notices: Not Supported 01:03:40.010 Firmware Activation Notices: Not Supported 01:03:40.010 ANA Change Notices: Not Supported 01:03:40.010 PLE Aggregate Log Change Notices: Not Supported 01:03:40.010 LBA Status Info Alert Notices: Not Supported 01:03:40.010 EGE Aggregate Log Change Notices: Not Supported 01:03:40.010 Normal NVM Subsystem Shutdown event: Not Supported 01:03:40.010 Zone Descriptor Change Notices: Not Supported 01:03:40.010 Discovery Log Change Notices: Supported 01:03:40.010 Controller Attributes 01:03:40.010 128-bit Host Identifier: Not Supported 01:03:40.010 Non-Operational Permissive Mode: Not Supported 01:03:40.010 NVM Sets: Not Supported 01:03:40.010 Read Recovery Levels: Not Supported 01:03:40.010 Endurance Groups: Not Supported 01:03:40.010 Predictable Latency Mode: Not Supported 01:03:40.010 Traffic Based Keep ALive: Not Supported 01:03:40.010 Namespace Granularity: Not Supported 01:03:40.010 SQ Associations: Not Supported 01:03:40.010 UUID List: Not Supported 01:03:40.010 Multi-Domain Subsystem: Not Supported 01:03:40.010 Fixed Capacity Management: Not Supported 01:03:40.010 Variable Capacity Management: Not Supported 01:03:40.010 Delete Endurance Group: Not Supported 01:03:40.010 Delete NVM Set: Not Supported 01:03:40.010 Extended LBA Formats Supported: Not Supported 01:03:40.010 Flexible Data Placement Supported: Not Supported 01:03:40.010 01:03:40.010 Controller Memory Buffer Support 01:03:40.010 ================================ 01:03:40.010 Supported: No 01:03:40.010 01:03:40.010 Persistent Memory Region Support 01:03:40.010 ================================ 01:03:40.010 Supported: No 01:03:40.010 01:03:40.010 Admin Command Set Attributes 01:03:40.010 ============================ 01:03:40.010 Security Send/Receive: Not Supported 01:03:40.010 Format NVM: Not Supported 01:03:40.010 Firmware Activate/Download: Not Supported 01:03:40.010 Namespace Management: Not Supported 01:03:40.010 Device Self-Test: Not Supported 01:03:40.010 Directives: Not Supported 01:03:40.010 NVMe-MI: Not Supported 01:03:40.010 Virtualization Management: Not Supported 01:03:40.010 Doorbell Buffer Config: Not Supported 01:03:40.010 Get LBA Status Capability: Not Supported 01:03:40.010 Command & Feature Lockdown Capability: Not Supported 01:03:40.010 Abort Command Limit: 1 01:03:40.010 Async 
Event Request Limit: 4 01:03:40.010 Number of Firmware Slots: N/A 01:03:40.010 Firmware Slot 1 Read-Only: N/A 01:03:40.010 [2024-12-09 06:02:34.447766] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.010 [2024-12-09 06:02:34.447787] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.010 [2024-12-09 06:02:34.447809] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.010 [2024-12-09 06:02:34.447814] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30c00) on tqpair=0x19efd90 01:03:40.010 Firmware Activation Without Reset: N/A 01:03:40.010 Multiple Update Detection Support: N/A 01:03:40.010 Firmware Update Granularity: No Information Provided 01:03:40.010 Per-Namespace SMART Log: No 01:03:40.010 Asymmetric Namespace Access Log Page: Not Supported 01:03:40.010 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 01:03:40.010 Command Effects Log Page: Not Supported 01:03:40.010 Get Log Page Extended Data: Supported 01:03:40.010 Telemetry Log Pages: Not Supported 01:03:40.010 Persistent Event Log Pages: Not Supported 01:03:40.010 Supported Log Pages Log Page: May Support 01:03:40.010 Commands Supported & Effects Log Page: Not Supported 01:03:40.010 Feature Identifiers & Effects Log Page:May Support 01:03:40.010 NVMe-MI Commands & Effects Log Page: May Support 01:03:40.010 Data Area 4 for Telemetry Log: Not Supported 01:03:40.010 Error Log Page Entries Supported: 128 01:03:40.010 Keep Alive: Not Supported 01:03:40.010 01:03:40.010 NVM Command Set Attributes 01:03:40.010 ========================== 01:03:40.010 Submission Queue Entry Size 01:03:40.010 Max: 1 01:03:40.010 Min: 1 01:03:40.010 Completion Queue Entry Size 01:03:40.010 Max: 1 01:03:40.010 Min: 1 01:03:40.010 Number of Namespaces: 0 01:03:40.010 Compare Command: Not Supported 01:03:40.010 Write Uncorrectable Command: Not Supported 01:03:40.010 Dataset Management Command: Not Supported 01:03:40.010 Write Zeroes Command: Not Supported 01:03:40.010 Set Features Save Field: Not Supported 01:03:40.010 Reservations: Not Supported 01:03:40.010 Timestamp: Not Supported 01:03:40.010 Copy: Not Supported 01:03:40.010 Volatile Write Cache: Not Present 01:03:40.010 Atomic Write Unit (Normal): 1 01:03:40.010 Atomic Write Unit (PFail): 1 01:03:40.010 Atomic Compare & Write Unit: 1 01:03:40.010 Fused Compare & Write: Supported 01:03:40.010 Scatter-Gather List 01:03:40.010 SGL Command Set: Supported 01:03:40.010 SGL Keyed: Supported 01:03:40.010 SGL Bit Bucket Descriptor: Not Supported 01:03:40.010 SGL Metadata Pointer: Not Supported 01:03:40.010 Oversized SGL: Not Supported 01:03:40.010 SGL Metadata Address: Not Supported 01:03:40.010 SGL Offset: Supported 01:03:40.010 Transport SGL Data Block: Not Supported 01:03:40.010 Replay Protected Memory Block: Not Supported 01:03:40.010 01:03:40.010 Firmware Slot Information 01:03:40.010 ========================= 01:03:40.010 Active slot: 0 01:03:40.010 01:03:40.010 01:03:40.010 Error Log 01:03:40.010 ========= 01:03:40.010 01:03:40.010 Active Namespaces 01:03:40.010 ================= 01:03:40.010 Discovery Log Page 01:03:40.010 ================== 01:03:40.010 Generation Counter: 2 01:03:40.010 Number of Records: 2 01:03:40.010 Record Format: 0 01:03:40.010 01:03:40.010 Discovery Log Entry 0 01:03:40.010 ---------------------- 01:03:40.010 Transport Type: 3 (TCP) 01:03:40.010 Address Family: 1 (IPv4) 01:03:40.011 Subsystem Type: 3 (Current Discovery Subsystem) 01:03:40.011 Entry Flags: 01:03:40.011 Duplicate Returned 
Information: 1 01:03:40.011 Explicit Persistent Connection Support for Discovery: 1 01:03:40.011 Transport Requirements: 01:03:40.011 Secure Channel: Not Required 01:03:40.011 Port ID: 0 (0x0000) 01:03:40.011 Controller ID: 65535 (0xffff) 01:03:40.011 Admin Max SQ Size: 128 01:03:40.011 Transport Service Identifier: 4420 01:03:40.011 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 01:03:40.011 Transport Address: 10.0.0.3 01:03:40.011 Discovery Log Entry 1 01:03:40.011 ---------------------- 01:03:40.011 Transport Type: 3 (TCP) 01:03:40.011 Address Family: 1 (IPv4) 01:03:40.011 Subsystem Type: 2 (NVM Subsystem) 01:03:40.011 Entry Flags: 01:03:40.011 Duplicate Returned Information: 0 01:03:40.011 Explicit Persistent Connection Support for Discovery: 0 01:03:40.011 Transport Requirements: 01:03:40.011 Secure Channel: Not Required 01:03:40.011 Port ID: 0 (0x0000) 01:03:40.011 Controller ID: 65535 (0xffff) 01:03:40.011 Admin Max SQ Size: 128 01:03:40.011 Transport Service Identifier: 4420 01:03:40.011 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 01:03:40.011 Transport Address: 10.0.0.3 [2024-12-09 06:02:34.447908] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 01:03:40.011 [2024-12-09 06:02:34.447923] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30600) on tqpair=0x19efd90 01:03:40.011 [2024-12-09 06:02:34.447930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:40.011 [2024-12-09 06:02:34.447936] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30780) on tqpair=0x19efd90 01:03:40.011 [2024-12-09 06:02:34.447941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:40.011 [2024-12-09 06:02:34.447946] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30900) on tqpair=0x19efd90 01:03:40.011 [2024-12-09 06:02:34.447951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:40.011 [2024-12-09 06:02:34.447955] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.011 [2024-12-09 06:02:34.447960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:40.011 [2024-12-09 06:02:34.447971] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.011 [2024-12-09 06:02:34.447976] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.011 [2024-12-09 06:02:34.447980] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.011 [2024-12-09 06:02:34.447988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.011 [2024-12-09 06:02:34.448015] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.011 [2024-12-09 06:02:34.448084] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.011 [2024-12-09 06:02:34.448091] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.011 [2024-12-09 06:02:34.448095] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.011 [2024-12-09 06:02:34.448099] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.011 [2024-12-09 06:02:34.448107] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.011 [2024-12-09 06:02:34.448112] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.011 [2024-12-09 06:02:34.448116] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.011 [2024-12-09 06:02:34.448140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.011 [2024-12-09 06:02:34.448164] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.011 [2024-12-09 06:02:34.448236] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.011 [2024-12-09 06:02:34.448244] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.011 [2024-12-09 06:02:34.448248] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.011 [2024-12-09 06:02:34.448252] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.011 [2024-12-09 06:02:34.448262] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 01:03:40.011 [2024-12-09 06:02:34.448268] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 01:03:40.011 [2024-12-09 06:02:34.448279] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.011 [2024-12-09 06:02:34.448284] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.011 [2024-12-09 06:02:34.448288] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.011 [2024-12-09 06:02:34.448296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.011 [2024-12-09 06:02:34.448316] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.011 [2024-12-09 06:02:34.448367] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.011 [2024-12-09 06:02:34.448379] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.011 [2024-12-09 06:02:34.448384] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.011 [2024-12-09 06:02:34.448388] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.011 [2024-12-09 06:02:34.448400] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.011 [2024-12-09 06:02:34.448405] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.011 [2024-12-09 06:02:34.448409] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.011 [2024-12-09 06:02:34.448417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.011 [2024-12-09 06:02:34.448436] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.011 [2024-12-09 06:02:34.448490] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.011 [2024-12-09 06:02:34.448497] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.011 [2024-12-09 
06:02:34.448501] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.011 [2024-12-09 06:02:34.448505] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.011 [2024-12-09 06:02:34.448516] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.011 [2024-12-09 06:02:34.448520] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.011 [2024-12-09 06:02:34.448524] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.011 [2024-12-09 06:02:34.448532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.011 [2024-12-09 06:02:34.448550] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.011 [2024-12-09 06:02:34.448601] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.011 [2024-12-09 06:02:34.448608] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.011 [2024-12-09 06:02:34.448612] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.011 [2024-12-09 06:02:34.448616] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.011 [2024-12-09 06:02:34.448627] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.011 [2024-12-09 06:02:34.448631] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.011 [2024-12-09 06:02:34.448635] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.011 [2024-12-09 06:02:34.448643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.011 [2024-12-09 06:02:34.448677] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.011 [2024-12-09 06:02:34.448743] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.011 [2024-12-09 06:02:34.448750] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.011 [2024-12-09 06:02:34.448754] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.011 [2024-12-09 06:02:34.448758] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.011 [2024-12-09 06:02:34.448769] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.011 [2024-12-09 06:02:34.448774] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.011 [2024-12-09 06:02:34.448778] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.011 [2024-12-09 06:02:34.448786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.011 [2024-12-09 06:02:34.448805] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.011 [2024-12-09 06:02:34.448854] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.011 [2024-12-09 06:02:34.448861] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.011 [2024-12-09 06:02:34.448865] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.011 [2024-12-09 06:02:34.448869] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on 
tqpair=0x19efd90 01:03:40.011 [2024-12-09 06:02:34.448880] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.011 [2024-12-09 06:02:34.448885] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.011 [2024-12-09 06:02:34.448889] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.011 [2024-12-09 06:02:34.448896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.011 [2024-12-09 06:02:34.448914] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.011 [2024-12-09 06:02:34.448964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.011 [2024-12-09 06:02:34.448971] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.011 [2024-12-09 06:02:34.448975] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.011 [2024-12-09 06:02:34.448979] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.011 [2024-12-09 06:02:34.448990] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.011 [2024-12-09 06:02:34.448995] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.011 [2024-12-09 06:02:34.448999] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.012 [2024-12-09 06:02:34.449006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.012 [2024-12-09 06:02:34.449023] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.012 [2024-12-09 06:02:34.449075] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.012 [2024-12-09 06:02:34.449082] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.012 [2024-12-09 06:02:34.449086] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.012 [2024-12-09 06:02:34.449090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.012 [2024-12-09 06:02:34.449101] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.012 [2024-12-09 06:02:34.449106] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.012 [2024-12-09 06:02:34.449109] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.012 [2024-12-09 06:02:34.449117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.012 [2024-12-09 06:02:34.449134] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.012 [2024-12-09 06:02:34.449188] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.012 [2024-12-09 06:02:34.449195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.012 [2024-12-09 06:02:34.449199] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.012 [2024-12-09 06:02:34.449203] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.012 [2024-12-09 06:02:34.449214] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.012 [2024-12-09 06:02:34.449218] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.012 [2024-12-09 06:02:34.449222] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.012 [2024-12-09 06:02:34.449230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.012 [2024-12-09 06:02:34.449247] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.012 [2024-12-09 06:02:34.449297] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.012 [2024-12-09 06:02:34.449305] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.012 [2024-12-09 06:02:34.449309] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.012 [2024-12-09 06:02:34.449313] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.012 [2024-12-09 06:02:34.449324] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.012 [2024-12-09 06:02:34.449329] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.012 [2024-12-09 06:02:34.449333] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.012 [2024-12-09 06:02:34.449340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.012 [2024-12-09 06:02:34.449358] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.012 [2024-12-09 06:02:34.449411] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.012 [2024-12-09 06:02:34.449418] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.012 [2024-12-09 06:02:34.449422] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.012 [2024-12-09 06:02:34.449426] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.012 [2024-12-09 06:02:34.449437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.012 [2024-12-09 06:02:34.449442] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.012 [2024-12-09 06:02:34.449445] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.012 [2024-12-09 06:02:34.449453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.012 [2024-12-09 06:02:34.449471] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.012 [2024-12-09 06:02:34.449525] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.012 [2024-12-09 06:02:34.449532] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.012 [2024-12-09 06:02:34.449536] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.012 [2024-12-09 06:02:34.449540] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.012 [2024-12-09 06:02:34.449550] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.012 [2024-12-09 06:02:34.449555] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.012 [2024-12-09 06:02:34.449559] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.012 
[2024-12-09 06:02:34.449566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.012 [2024-12-09 06:02:34.449584] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.012 [2024-12-09 06:02:34.449635] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.012 [2024-12-09 06:02:34.449642] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.012 [2024-12-09 06:02:34.449656] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.012 [2024-12-09 06:02:34.449662] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.012 [2024-12-09 06:02:34.449673] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.012 [2024-12-09 06:02:34.449678] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.012 [2024-12-09 06:02:34.449682] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.012 [2024-12-09 06:02:34.449690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.012 [2024-12-09 06:02:34.449710] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.012 [2024-12-09 06:02:34.449763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.012 [2024-12-09 06:02:34.449770] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.012 [2024-12-09 06:02:34.449774] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.012 [2024-12-09 06:02:34.449778] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.012 [2024-12-09 06:02:34.449789] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.012 [2024-12-09 06:02:34.449794] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.012 [2024-12-09 06:02:34.449798] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.012 [2024-12-09 06:02:34.449805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.012 [2024-12-09 06:02:34.449823] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.012 [2024-12-09 06:02:34.449875] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.012 [2024-12-09 06:02:34.449882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.012 [2024-12-09 06:02:34.449885] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.012 [2024-12-09 06:02:34.449890] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.012 [2024-12-09 06:02:34.449900] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.012 [2024-12-09 06:02:34.449905] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.012 [2024-12-09 06:02:34.449909] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.012 [2024-12-09 06:02:34.449917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.012 [2024-12-09 06:02:34.449934] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.012 [2024-12-09 06:02:34.449987] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.012 [2024-12-09 06:02:34.449994] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.012 [2024-12-09 06:02:34.449998] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.012 [2024-12-09 06:02:34.450002] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.012 [2024-12-09 06:02:34.450014] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.012 [2024-12-09 06:02:34.450019] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.012 [2024-12-09 06:02:34.450022] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.012 [2024-12-09 06:02:34.450030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.012 [2024-12-09 06:02:34.450047] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.012 [2024-12-09 06:02:34.450097] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.012 [2024-12-09 06:02:34.450104] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.012 [2024-12-09 06:02:34.450108] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.012 [2024-12-09 06:02:34.450112] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.012 [2024-12-09 06:02:34.450123] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.012 [2024-12-09 06:02:34.450128] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.012 [2024-12-09 06:02:34.450132] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.012 [2024-12-09 06:02:34.450139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.013 [2024-12-09 06:02:34.450157] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.013 [2024-12-09 06:02:34.450207] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.013 [2024-12-09 06:02:34.450214] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.013 [2024-12-09 06:02:34.450218] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.450222] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.013 [2024-12-09 06:02:34.450233] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.450237] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.450241] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.013 [2024-12-09 06:02:34.450249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.013 [2024-12-09 06:02:34.450266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.013 [2024-12-09 06:02:34.450316] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.013 
[2024-12-09 06:02:34.450323] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.013 [2024-12-09 06:02:34.450327] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.450331] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.013 [2024-12-09 06:02:34.450342] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.450346] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.450350] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.013 [2024-12-09 06:02:34.450358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.013 [2024-12-09 06:02:34.450375] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.013 [2024-12-09 06:02:34.450425] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.013 [2024-12-09 06:02:34.450432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.013 [2024-12-09 06:02:34.450436] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.450441] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.013 [2024-12-09 06:02:34.450451] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.450456] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.450460] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.013 [2024-12-09 06:02:34.450467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.013 [2024-12-09 06:02:34.450485] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.013 [2024-12-09 06:02:34.450538] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.013 [2024-12-09 06:02:34.450545] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.013 [2024-12-09 06:02:34.450549] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.450553] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.013 [2024-12-09 06:02:34.450564] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.450568] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.450572] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.013 [2024-12-09 06:02:34.450580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.013 [2024-12-09 06:02:34.450598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.013 [2024-12-09 06:02:34.450698] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.013 [2024-12-09 06:02:34.450708] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.013 [2024-12-09 06:02:34.450712] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
01:03:40.013 [2024-12-09 06:02:34.450717] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.013 [2024-12-09 06:02:34.450729] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.450734] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.450738] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.013 [2024-12-09 06:02:34.450747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.013 [2024-12-09 06:02:34.450769] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.013 [2024-12-09 06:02:34.450824] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.013 [2024-12-09 06:02:34.450831] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.013 [2024-12-09 06:02:34.450835] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.450840] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.013 [2024-12-09 06:02:34.450851] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.450856] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.450861] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.013 [2024-12-09 06:02:34.450868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.013 [2024-12-09 06:02:34.450887] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.013 [2024-12-09 06:02:34.450941] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.013 [2024-12-09 06:02:34.450948] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.013 [2024-12-09 06:02:34.450952] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.450957] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.013 [2024-12-09 06:02:34.450968] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.450973] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.450978] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.013 [2024-12-09 06:02:34.450986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.013 [2024-12-09 06:02:34.451004] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.013 [2024-12-09 06:02:34.451072] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.013 [2024-12-09 06:02:34.451079] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.013 [2024-12-09 06:02:34.451083] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.451102] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.013 [2024-12-09 06:02:34.451113] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.451118] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.451122] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.013 [2024-12-09 06:02:34.451129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.013 [2024-12-09 06:02:34.451147] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.013 [2024-12-09 06:02:34.451195] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.013 [2024-12-09 06:02:34.451202] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.013 [2024-12-09 06:02:34.451206] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.451211] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.013 [2024-12-09 06:02:34.451221] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.451226] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.451230] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.013 [2024-12-09 06:02:34.451238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.013 [2024-12-09 06:02:34.451255] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.013 [2024-12-09 06:02:34.451307] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.013 [2024-12-09 06:02:34.451314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.013 [2024-12-09 06:02:34.451317] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.451322] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.013 [2024-12-09 06:02:34.451332] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.451337] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.451341] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.013 [2024-12-09 06:02:34.451348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.013 [2024-12-09 06:02:34.451367] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.013 [2024-12-09 06:02:34.451417] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.013 [2024-12-09 06:02:34.451425] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.013 [2024-12-09 06:02:34.451429] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.451433] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.013 [2024-12-09 06:02:34.451444] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.451449] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.451453] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.013 [2024-12-09 06:02:34.451460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.013 [2024-12-09 06:02:34.451478] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.013 [2024-12-09 06:02:34.451531] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.013 [2024-12-09 06:02:34.451538] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.013 [2024-12-09 06:02:34.451542] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.451547] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.013 [2024-12-09 06:02:34.451557] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.013 [2024-12-09 06:02:34.451562] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.014 [2024-12-09 06:02:34.451566] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.014 [2024-12-09 06:02:34.451573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.014 [2024-12-09 06:02:34.451591] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.014 [2024-12-09 06:02:34.451641] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.014 [2024-12-09 06:02:34.451648] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.014 [2024-12-09 06:02:34.451652] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.014 [2024-12-09 06:02:34.451656] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.014 [2024-12-09 06:02:34.451683] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.014 [2024-12-09 06:02:34.451688] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.014 [2024-12-09 06:02:34.455712] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19efd90) 01:03:40.014 [2024-12-09 06:02:34.455726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.014 [2024-12-09 06:02:34.455756] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30a80, cid 3, qid 0 01:03:40.014 [2024-12-09 06:02:34.455816] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.014 [2024-12-09 06:02:34.455824] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.014 [2024-12-09 06:02:34.455828] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.014 [2024-12-09 06:02:34.455833] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a30a80) on tqpair=0x19efd90 01:03:40.014 [2024-12-09 06:02:34.455842] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 01:03:40.014 01:03:40.014 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 01:03:40.014 [2024-12-09 06:02:34.497408] 
Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:03:40.014 [2024-12-09 06:02:34.497464] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86431 ] 01:03:40.276 [2024-12-09 06:02:34.653719] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 01:03:40.276 [2024-12-09 06:02:34.653797] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 01:03:40.276 [2024-12-09 06:02:34.653803] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 01:03:40.276 [2024-12-09 06:02:34.653819] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 01:03:40.276 [2024-12-09 06:02:34.653832] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 01:03:40.276 [2024-12-09 06:02:34.654232] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 01:03:40.276 [2024-12-09 06:02:34.654297] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd81d90 0 01:03:40.276 [2024-12-09 06:02:34.660268] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 01:03:40.276 [2024-12-09 06:02:34.660293] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 01:03:40.276 [2024-12-09 06:02:34.660315] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 01:03:40.276 [2024-12-09 06:02:34.660319] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 01:03:40.276 [2024-12-09 06:02:34.660351] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.276 [2024-12-09 06:02:34.660358] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.276 [2024-12-09 06:02:34.660362] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd81d90) 01:03:40.276 [2024-12-09 06:02:34.660376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 01:03:40.276 [2024-12-09 06:02:34.660409] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2600, cid 0, qid 0 01:03:40.276 [2024-12-09 06:02:34.667694] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.276 [2024-12-09 06:02:34.667715] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.276 [2024-12-09 06:02:34.667736] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.276 [2024-12-09 06:02:34.667741] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2600) on tqpair=0xd81d90 01:03:40.276 [2024-12-09 06:02:34.667751] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 01:03:40.276 [2024-12-09 06:02:34.667760] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 01:03:40.276 [2024-12-09 06:02:34.667767] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 01:03:40.276 [2024-12-09 06:02:34.667784] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.276 [2024-12-09 06:02:34.667789] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.276 [2024-12-09 06:02:34.667793] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd81d90) 01:03:40.276 [2024-12-09 06:02:34.667803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.276 [2024-12-09 06:02:34.667831] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2600, cid 0, qid 0 01:03:40.276 [2024-12-09 06:02:34.667904] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.276 [2024-12-09 06:02:34.667911] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.276 [2024-12-09 06:02:34.667915] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.276 [2024-12-09 06:02:34.667919] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2600) on tqpair=0xd81d90 01:03:40.276 [2024-12-09 06:02:34.667925] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 01:03:40.276 [2024-12-09 06:02:34.667933] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 01:03:40.276 [2024-12-09 06:02:34.667941] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.276 [2024-12-09 06:02:34.667945] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.276 [2024-12-09 06:02:34.667948] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd81d90) 01:03:40.276 [2024-12-09 06:02:34.667956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.276 [2024-12-09 06:02:34.667993] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2600, cid 0, qid 0 01:03:40.276 [2024-12-09 06:02:34.668361] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.276 [2024-12-09 06:02:34.668376] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.276 [2024-12-09 06:02:34.668381] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.276 [2024-12-09 06:02:34.668385] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2600) on tqpair=0xd81d90 01:03:40.276 [2024-12-09 06:02:34.668391] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 01:03:40.276 [2024-12-09 06:02:34.668401] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 01:03:40.276 [2024-12-09 06:02:34.668409] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.276 [2024-12-09 06:02:34.668413] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.276 [2024-12-09 06:02:34.668417] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd81d90) 01:03:40.276 [2024-12-09 06:02:34.668425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.276 [2024-12-09 06:02:34.668446] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2600, cid 0, qid 0 01:03:40.276 [2024-12-09 06:02:34.668505] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.276 [2024-12-09 06:02:34.668512] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.276 [2024-12-09 06:02:34.668515] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.276 [2024-12-09 06:02:34.668519] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2600) on tqpair=0xd81d90 01:03:40.276 [2024-12-09 06:02:34.668525] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 01:03:40.276 [2024-12-09 06:02:34.668535] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.276 [2024-12-09 06:02:34.668540] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.276 [2024-12-09 06:02:34.668544] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd81d90) 01:03:40.276 [2024-12-09 06:02:34.668551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.276 [2024-12-09 06:02:34.668569] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2600, cid 0, qid 0 01:03:40.276 [2024-12-09 06:02:34.669061] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.276 [2024-12-09 06:02:34.669076] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.276 [2024-12-09 06:02:34.669081] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.276 [2024-12-09 06:02:34.669085] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2600) on tqpair=0xd81d90 01:03:40.276 [2024-12-09 06:02:34.669091] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 01:03:40.276 [2024-12-09 06:02:34.669096] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 01:03:40.276 [2024-12-09 06:02:34.669105] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 01:03:40.276 [2024-12-09 06:02:34.669231] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 01:03:40.276 [2024-12-09 06:02:34.669238] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 01:03:40.276 [2024-12-09 06:02:34.669247] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.276 [2024-12-09 06:02:34.669252] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.276 [2024-12-09 06:02:34.669256] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd81d90) 01:03:40.276 [2024-12-09 06:02:34.669263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.276 [2024-12-09 06:02:34.669288] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2600, cid 0, qid 0 01:03:40.276 [2024-12-09 06:02:34.669360] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.276 [2024-12-09 06:02:34.669367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.276 [2024-12-09 06:02:34.669371] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.276 [2024-12-09 06:02:34.669375] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2600) on tqpair=0xd81d90 01:03:40.276 [2024-12-09 06:02:34.669380] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 01:03:40.276 [2024-12-09 06:02:34.669391] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.276 [2024-12-09 06:02:34.669396] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.276 [2024-12-09 06:02:34.669400] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd81d90) 01:03:40.276 [2024-12-09 06:02:34.669407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.276 [2024-12-09 06:02:34.669426] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2600, cid 0, qid 0 01:03:40.276 [2024-12-09 06:02:34.669854] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.276 [2024-12-09 06:02:34.669870] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.276 [2024-12-09 06:02:34.669875] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.276 [2024-12-09 06:02:34.669879] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2600) on tqpair=0xd81d90 01:03:40.276 [2024-12-09 06:02:34.669884] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 01:03:40.276 [2024-12-09 06:02:34.669890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 01:03:40.276 [2024-12-09 06:02:34.669899] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 01:03:40.277 [2024-12-09 06:02:34.669911] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 01:03:40.277 [2024-12-09 06:02:34.669922] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.277 [2024-12-09 06:02:34.669927] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd81d90) 01:03:40.277 [2024-12-09 06:02:34.669936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.277 [2024-12-09 06:02:34.669960] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2600, cid 0, qid 0 01:03:40.277 [2024-12-09 06:02:34.670290] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:03:40.277 [2024-12-09 06:02:34.670305] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:03:40.277 [2024-12-09 06:02:34.670309] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:03:40.277 [2024-12-09 06:02:34.670314] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd81d90): datao=0, datal=4096, cccid=0 01:03:40.277 [2024-12-09 06:02:34.670319] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc2600) on tqpair(0xd81d90): expected_datao=0, payload_size=4096 01:03:40.277 [2024-12-09 06:02:34.670324] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.277 [2024-12-09 06:02:34.670332] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: 
*DEBUG*: enter 01:03:40.277 [2024-12-09 06:02:34.670336] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:03:40.277 [2024-12-09 06:02:34.670393] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.277 [2024-12-09 06:02:34.670399] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.277 [2024-12-09 06:02:34.670403] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.277 [2024-12-09 06:02:34.670407] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2600) on tqpair=0xd81d90 01:03:40.277 [2024-12-09 06:02:34.670416] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 01:03:40.277 [2024-12-09 06:02:34.670422] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 01:03:40.277 [2024-12-09 06:02:34.670427] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 01:03:40.277 [2024-12-09 06:02:34.670432] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 01:03:40.277 [2024-12-09 06:02:34.670437] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 01:03:40.277 [2024-12-09 06:02:34.670442] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 01:03:40.277 [2024-12-09 06:02:34.670452] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 01:03:40.277 [2024-12-09 06:02:34.670460] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.277 [2024-12-09 06:02:34.670464] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.277 [2024-12-09 06:02:34.670468] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd81d90) 01:03:40.277 [2024-12-09 06:02:34.670476] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 01:03:40.277 [2024-12-09 06:02:34.670499] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2600, cid 0, qid 0 01:03:40.277 [2024-12-09 06:02:34.671053] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.277 [2024-12-09 06:02:34.671069] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.277 [2024-12-09 06:02:34.671074] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.277 [2024-12-09 06:02:34.671078] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2600) on tqpair=0xd81d90 01:03:40.277 [2024-12-09 06:02:34.671092] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.277 [2024-12-09 06:02:34.671097] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.277 [2024-12-09 06:02:34.671101] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd81d90) 01:03:40.277 [2024-12-09 06:02:34.671108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:03:40.277 [2024-12-09 06:02:34.671115] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.277 [2024-12-09 06:02:34.671119] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 01:03:40.277 [2024-12-09 06:02:34.671123] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xd81d90) 01:03:40.277 [2024-12-09 06:02:34.671129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:03:40.277 [2024-12-09 06:02:34.671135] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.277 [2024-12-09 06:02:34.671139] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.277 [2024-12-09 06:02:34.671142] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xd81d90) 01:03:40.277 [2024-12-09 06:02:34.671148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:03:40.277 [2024-12-09 06:02:34.671154] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.277 [2024-12-09 06:02:34.671158] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.277 [2024-12-09 06:02:34.671162] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd81d90) 01:03:40.277 [2024-12-09 06:02:34.671167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:03:40.277 [2024-12-09 06:02:34.671173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 01:03:40.277 [2024-12-09 06:02:34.671182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 01:03:40.277 [2024-12-09 06:02:34.671190] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.277 [2024-12-09 06:02:34.671194] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd81d90) 01:03:40.277 [2024-12-09 06:02:34.671201] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.277 [2024-12-09 06:02:34.671226] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2600, cid 0, qid 0 01:03:40.277 [2024-12-09 06:02:34.671234] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2780, cid 1, qid 0 01:03:40.277 [2024-12-09 06:02:34.671239] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2900, cid 2, qid 0 01:03:40.277 [2024-12-09 06:02:34.671244] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2a80, cid 3, qid 0 01:03:40.277 [2024-12-09 06:02:34.671248] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2c00, cid 4, qid 0 01:03:40.277 [2024-12-09 06:02:34.675748] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.277 [2024-12-09 06:02:34.675768] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.277 [2024-12-09 06:02:34.675789] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.277 [2024-12-09 06:02:34.675794] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2c00) on tqpair=0xd81d90 01:03:40.277 [2024-12-09 06:02:34.675801] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 01:03:40.277 [2024-12-09 06:02:34.675814] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 01:03:40.277 [2024-12-09 06:02:34.675825] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 01:03:40.277 [2024-12-09 06:02:34.675832] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 01:03:40.277 [2024-12-09 06:02:34.675841] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.277 [2024-12-09 06:02:34.675846] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.277 [2024-12-09 06:02:34.675850] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd81d90) 01:03:40.277 [2024-12-09 06:02:34.675859] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:03:40.277 [2024-12-09 06:02:34.675887] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2c00, cid 4, qid 0 01:03:40.277 [2024-12-09 06:02:34.675954] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.277 [2024-12-09 06:02:34.675962] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.277 [2024-12-09 06:02:34.675966] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.277 [2024-12-09 06:02:34.675970] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2c00) on tqpair=0xd81d90 01:03:40.277 [2024-12-09 06:02:34.676041] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 01:03:40.277 [2024-12-09 06:02:34.676056] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 01:03:40.277 [2024-12-09 06:02:34.676080] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.277 [2024-12-09 06:02:34.676084] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd81d90) 01:03:40.277 [2024-12-09 06:02:34.676092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.277 [2024-12-09 06:02:34.676115] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2c00, cid 4, qid 0 01:03:40.277 [2024-12-09 06:02:34.676461] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:03:40.277 [2024-12-09 06:02:34.676478] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:03:40.277 [2024-12-09 06:02:34.676482] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:03:40.277 [2024-12-09 06:02:34.676487] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd81d90): datao=0, datal=4096, cccid=4 01:03:40.277 [2024-12-09 06:02:34.676492] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc2c00) on tqpair(0xd81d90): expected_datao=0, payload_size=4096 01:03:40.277 [2024-12-09 06:02:34.676497] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.277 [2024-12-09 06:02:34.676505] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:03:40.277 [2024-12-09 06:02:34.676509] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 01:03:40.277 [2024-12-09 06:02:34.676573] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.277 [2024-12-09 06:02:34.676580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.277 [2024-12-09 06:02:34.676584] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.277 [2024-12-09 06:02:34.676588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2c00) on tqpair=0xd81d90 01:03:40.277 [2024-12-09 06:02:34.676606] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 01:03:40.277 [2024-12-09 06:02:34.676621] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 01:03:40.278 [2024-12-09 06:02:34.676633] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 01:03:40.278 [2024-12-09 06:02:34.676643] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.278 [2024-12-09 06:02:34.676676] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd81d90) 01:03:40.278 [2024-12-09 06:02:34.676685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.278 [2024-12-09 06:02:34.676711] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2c00, cid 4, qid 0 01:03:40.278 [2024-12-09 06:02:34.677053] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:03:40.278 [2024-12-09 06:02:34.677069] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:03:40.278 [2024-12-09 06:02:34.677074] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:03:40.278 [2024-12-09 06:02:34.677078] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd81d90): datao=0, datal=4096, cccid=4 01:03:40.278 [2024-12-09 06:02:34.677083] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc2c00) on tqpair(0xd81d90): expected_datao=0, payload_size=4096 01:03:40.278 [2024-12-09 06:02:34.677088] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.278 [2024-12-09 06:02:34.677096] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:03:40.278 [2024-12-09 06:02:34.677100] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:03:40.278 [2024-12-09 06:02:34.677110] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.278 [2024-12-09 06:02:34.677117] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.278 [2024-12-09 06:02:34.677120] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.278 [2024-12-09 06:02:34.677140] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2c00) on tqpair=0xd81d90 01:03:40.278 [2024-12-09 06:02:34.677159] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 01:03:40.278 [2024-12-09 06:02:34.677172] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 01:03:40.278 [2024-12-09 06:02:34.677181] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.278 [2024-12-09 06:02:34.677186] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd81d90) 01:03:40.278 [2024-12-09 06:02:34.677194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.278 [2024-12-09 06:02:34.677218] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2c00, cid 4, qid 0 01:03:40.278 [2024-12-09 06:02:34.677502] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:03:40.278 [2024-12-09 06:02:34.677517] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:03:40.278 [2024-12-09 06:02:34.677522] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:03:40.278 [2024-12-09 06:02:34.677526] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd81d90): datao=0, datal=4096, cccid=4 01:03:40.278 [2024-12-09 06:02:34.677531] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc2c00) on tqpair(0xd81d90): expected_datao=0, payload_size=4096 01:03:40.278 [2024-12-09 06:02:34.677535] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.278 [2024-12-09 06:02:34.677543] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:03:40.278 [2024-12-09 06:02:34.677547] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:03:40.278 [2024-12-09 06:02:34.677610] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.278 [2024-12-09 06:02:34.677617] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.278 [2024-12-09 06:02:34.677621] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.278 [2024-12-09 06:02:34.677625] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2c00) on tqpair=0xd81d90 01:03:40.278 [2024-12-09 06:02:34.677635] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 01:03:40.278 [2024-12-09 06:02:34.677645] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 01:03:40.278 [2024-12-09 06:02:34.677656] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 01:03:40.278 [2024-12-09 06:02:34.677676] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 01:03:40.278 [2024-12-09 06:02:34.677683] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 01:03:40.278 [2024-12-09 06:02:34.677689] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 01:03:40.278 [2024-12-09 06:02:34.677695] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 01:03:40.278 [2024-12-09 06:02:34.677700] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 01:03:40.278 [2024-12-09 06:02:34.677706] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 01:03:40.278 [2024-12-09 06:02:34.677725] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 01:03:40.278 [2024-12-09 06:02:34.677730] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd81d90) 01:03:40.278 [2024-12-09 06:02:34.677739] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.278 [2024-12-09 06:02:34.677747] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.278 [2024-12-09 06:02:34.677751] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.278 [2024-12-09 06:02:34.677756] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd81d90) 01:03:40.278 [2024-12-09 06:02:34.677762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 01:03:40.278 [2024-12-09 06:02:34.677794] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2c00, cid 4, qid 0 01:03:40.278 [2024-12-09 06:02:34.677802] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2d80, cid 5, qid 0 01:03:40.278 [2024-12-09 06:02:34.678171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.278 [2024-12-09 06:02:34.678187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.278 [2024-12-09 06:02:34.678191] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.278 [2024-12-09 06:02:34.678196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2c00) on tqpair=0xd81d90 01:03:40.278 [2024-12-09 06:02:34.678204] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.278 [2024-12-09 06:02:34.678210] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.278 [2024-12-09 06:02:34.678214] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.278 [2024-12-09 06:02:34.678218] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2d80) on tqpair=0xd81d90 01:03:40.278 [2024-12-09 06:02:34.678230] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.278 [2024-12-09 06:02:34.678235] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd81d90) 01:03:40.278 [2024-12-09 06:02:34.678243] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.278 [2024-12-09 06:02:34.678265] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2d80, cid 5, qid 0 01:03:40.278 [2024-12-09 06:02:34.678332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.278 [2024-12-09 06:02:34.678339] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.278 [2024-12-09 06:02:34.678343] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.278 [2024-12-09 06:02:34.678347] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2d80) on tqpair=0xd81d90 01:03:40.278 [2024-12-09 06:02:34.678358] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.278 [2024-12-09 06:02:34.678363] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd81d90) 01:03:40.278 [2024-12-09 06:02:34.678370] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.278 [2024-12-09 06:02:34.678404] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2d80, cid 5, qid 0 01:03:40.278 [2024-12-09 06:02:34.678904] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.278 [2024-12-09 06:02:34.678921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.278 [2024-12-09 06:02:34.678926] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.278 [2024-12-09 06:02:34.678930] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2d80) on tqpair=0xd81d90 01:03:40.278 [2024-12-09 06:02:34.678943] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.278 [2024-12-09 06:02:34.678948] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd81d90) 01:03:40.278 [2024-12-09 06:02:34.678956] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.278 [2024-12-09 06:02:34.678979] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2d80, cid 5, qid 0 01:03:40.278 [2024-12-09 06:02:34.679041] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.278 [2024-12-09 06:02:34.679048] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.278 [2024-12-09 06:02:34.679052] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.278 [2024-12-09 06:02:34.679056] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2d80) on tqpair=0xd81d90 01:03:40.278 [2024-12-09 06:02:34.679078] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.278 [2024-12-09 06:02:34.679085] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd81d90) 01:03:40.278 [2024-12-09 06:02:34.679107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.278 [2024-12-09 06:02:34.679116] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.278 [2024-12-09 06:02:34.679120] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd81d90) 01:03:40.278 [2024-12-09 06:02:34.679127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.278 [2024-12-09 06:02:34.679135] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.278 [2024-12-09 06:02:34.679139] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xd81d90) 01:03:40.278 [2024-12-09 06:02:34.679145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.278 [2024-12-09 06:02:34.679156] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.278 [2024-12-09 06:02:34.679161] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xd81d90) 01:03:40.278 [2024-12-09 06:02:34.679168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.278 [2024-12-09 06:02:34.679190] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2d80, cid 5, qid 0 01:03:40.278 
[2024-12-09 06:02:34.679198] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2c00, cid 4, qid 0 01:03:40.279 [2024-12-09 06:02:34.679203] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2f00, cid 6, qid 0 01:03:40.279 [2024-12-09 06:02:34.679208] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc3080, cid 7, qid 0 01:03:40.279 [2024-12-09 06:02:34.683755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:03:40.279 [2024-12-09 06:02:34.683775] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:03:40.279 [2024-12-09 06:02:34.683781] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:03:40.279 [2024-12-09 06:02:34.683785] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd81d90): datao=0, datal=8192, cccid=5 01:03:40.279 [2024-12-09 06:02:34.683790] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc2d80) on tqpair(0xd81d90): expected_datao=0, payload_size=8192 01:03:40.279 [2024-12-09 06:02:34.683795] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.279 [2024-12-09 06:02:34.683804] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:03:40.279 [2024-12-09 06:02:34.683809] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:03:40.279 [2024-12-09 06:02:34.683815] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:03:40.279 [2024-12-09 06:02:34.683821] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:03:40.279 [2024-12-09 06:02:34.683825] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:03:40.279 [2024-12-09 06:02:34.683829] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd81d90): datao=0, datal=512, cccid=4 01:03:40.279 [2024-12-09 06:02:34.683834] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc2c00) on tqpair(0xd81d90): expected_datao=0, payload_size=512 01:03:40.279 [2024-12-09 06:02:34.683839] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.279 [2024-12-09 06:02:34.683846] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:03:40.279 [2024-12-09 06:02:34.683850] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:03:40.279 [2024-12-09 06:02:34.683856] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:03:40.279 [2024-12-09 06:02:34.683862] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:03:40.279 [2024-12-09 06:02:34.683866] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:03:40.279 [2024-12-09 06:02:34.683870] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd81d90): datao=0, datal=512, cccid=6 01:03:40.279 [2024-12-09 06:02:34.683875] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc2f00) on tqpair(0xd81d90): expected_datao=0, payload_size=512 01:03:40.279 [2024-12-09 06:02:34.683879] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.279 [2024-12-09 06:02:34.683886] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:03:40.279 [2024-12-09 06:02:34.683890] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:03:40.279 [2024-12-09 06:02:34.683896] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 01:03:40.279 [2024-12-09 06:02:34.683902] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 01:03:40.279 [2024-12-09 06:02:34.683906] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 01:03:40.279 [2024-12-09 06:02:34.683910] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd81d90): datao=0, datal=4096, cccid=7 01:03:40.279 [2024-12-09 06:02:34.683915] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc3080) on tqpair(0xd81d90): expected_datao=0, payload_size=4096 01:03:40.279 [2024-12-09 06:02:34.683919] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.279 [2024-12-09 06:02:34.683926] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 01:03:40.279 [2024-12-09 06:02:34.683930] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 01:03:40.279 [2024-12-09 06:02:34.683936] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.279 [2024-12-09 06:02:34.683942] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.279 [2024-12-09 06:02:34.683946] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.279 [2024-12-09 06:02:34.683951] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2d80) on tqpair=0xd81d90 01:03:40.279 [2024-12-09 06:02:34.683970] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.279 [2024-12-09 06:02:34.683978] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.279 [2024-12-09 06:02:34.683982] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.279 [2024-12-09 06:02:34.683986] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2c00) on tqpair=0xd81d90 01:03:40.279 [2024-12-09 06:02:34.683999] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.279 [2024-12-09 06:02:34.684006] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.279 [2024-12-09 06:02:34.684010] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.279 [2024-12-09 06:02:34.684014] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2f00) on tqpair=0xd81d90 01:03:40.279 [2024-12-09 06:02:34.684022] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.279 [2024-12-09 06:02:34.684029] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.279 [2024-12-09 06:02:34.684033] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.279 [2024-12-09 06:02:34.684037] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc3080) on tqpair=0xd81d90 01:03:40.279 ===================================================== 01:03:40.279 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:03:40.279 ===================================================== 01:03:40.279 Controller Capabilities/Features 01:03:40.279 ================================ 01:03:40.279 Vendor ID: 8086 01:03:40.279 Subsystem Vendor ID: 8086 01:03:40.279 Serial Number: SPDK00000000000001 01:03:40.279 Model Number: SPDK bdev Controller 01:03:40.279 Firmware Version: 25.01 01:03:40.279 Recommended Arb Burst: 6 01:03:40.279 IEEE OUI Identifier: e4 d2 5c 01:03:40.279 Multi-path I/O 01:03:40.279 May have multiple subsystem ports: Yes 01:03:40.279 May have multiple controllers: Yes 01:03:40.279 Associated with SR-IOV VF: No 01:03:40.279 Max Data Transfer Size: 131072 01:03:40.279 Max Number of Namespaces: 32 01:03:40.279 Max Number of I/O Queues: 127 01:03:40.279 NVMe Specification Version (VS): 1.3 01:03:40.279 NVMe Specification Version (Identify): 1.3 01:03:40.279 
Maximum Queue Entries: 128 01:03:40.279 Contiguous Queues Required: Yes 01:03:40.279 Arbitration Mechanisms Supported 01:03:40.279 Weighted Round Robin: Not Supported 01:03:40.279 Vendor Specific: Not Supported 01:03:40.279 Reset Timeout: 15000 ms 01:03:40.279 Doorbell Stride: 4 bytes 01:03:40.279 NVM Subsystem Reset: Not Supported 01:03:40.279 Command Sets Supported 01:03:40.279 NVM Command Set: Supported 01:03:40.279 Boot Partition: Not Supported 01:03:40.279 Memory Page Size Minimum: 4096 bytes 01:03:40.279 Memory Page Size Maximum: 4096 bytes 01:03:40.279 Persistent Memory Region: Not Supported 01:03:40.279 Optional Asynchronous Events Supported 01:03:40.279 Namespace Attribute Notices: Supported 01:03:40.279 Firmware Activation Notices: Not Supported 01:03:40.279 ANA Change Notices: Not Supported 01:03:40.279 PLE Aggregate Log Change Notices: Not Supported 01:03:40.279 LBA Status Info Alert Notices: Not Supported 01:03:40.279 EGE Aggregate Log Change Notices: Not Supported 01:03:40.279 Normal NVM Subsystem Shutdown event: Not Supported 01:03:40.279 Zone Descriptor Change Notices: Not Supported 01:03:40.279 Discovery Log Change Notices: Not Supported 01:03:40.279 Controller Attributes 01:03:40.279 128-bit Host Identifier: Supported 01:03:40.279 Non-Operational Permissive Mode: Not Supported 01:03:40.279 NVM Sets: Not Supported 01:03:40.279 Read Recovery Levels: Not Supported 01:03:40.279 Endurance Groups: Not Supported 01:03:40.279 Predictable Latency Mode: Not Supported 01:03:40.279 Traffic Based Keep ALive: Not Supported 01:03:40.279 Namespace Granularity: Not Supported 01:03:40.279 SQ Associations: Not Supported 01:03:40.279 UUID List: Not Supported 01:03:40.279 Multi-Domain Subsystem: Not Supported 01:03:40.279 Fixed Capacity Management: Not Supported 01:03:40.279 Variable Capacity Management: Not Supported 01:03:40.279 Delete Endurance Group: Not Supported 01:03:40.279 Delete NVM Set: Not Supported 01:03:40.279 Extended LBA Formats Supported: Not Supported 01:03:40.279 Flexible Data Placement Supported: Not Supported 01:03:40.279 01:03:40.279 Controller Memory Buffer Support 01:03:40.279 ================================ 01:03:40.279 Supported: No 01:03:40.279 01:03:40.279 Persistent Memory Region Support 01:03:40.279 ================================ 01:03:40.279 Supported: No 01:03:40.279 01:03:40.279 Admin Command Set Attributes 01:03:40.279 ============================ 01:03:40.279 Security Send/Receive: Not Supported 01:03:40.279 Format NVM: Not Supported 01:03:40.279 Firmware Activate/Download: Not Supported 01:03:40.279 Namespace Management: Not Supported 01:03:40.279 Device Self-Test: Not Supported 01:03:40.279 Directives: Not Supported 01:03:40.279 NVMe-MI: Not Supported 01:03:40.279 Virtualization Management: Not Supported 01:03:40.279 Doorbell Buffer Config: Not Supported 01:03:40.279 Get LBA Status Capability: Not Supported 01:03:40.279 Command & Feature Lockdown Capability: Not Supported 01:03:40.279 Abort Command Limit: 4 01:03:40.279 Async Event Request Limit: 4 01:03:40.279 Number of Firmware Slots: N/A 01:03:40.279 Firmware Slot 1 Read-Only: N/A 01:03:40.279 Firmware Activation Without Reset: N/A 01:03:40.279 Multiple Update Detection Support: N/A 01:03:40.279 Firmware Update Granularity: No Information Provided 01:03:40.279 Per-Namespace SMART Log: No 01:03:40.279 Asymmetric Namespace Access Log Page: Not Supported 01:03:40.279 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 01:03:40.279 Command Effects Log Page: Supported 01:03:40.279 Get Log Page Extended Data: 
Supported 01:03:40.279 Telemetry Log Pages: Not Supported 01:03:40.279 Persistent Event Log Pages: Not Supported 01:03:40.279 Supported Log Pages Log Page: May Support 01:03:40.279 Commands Supported & Effects Log Page: Not Supported 01:03:40.279 Feature Identifiers & Effects Log Page:May Support 01:03:40.279 NVMe-MI Commands & Effects Log Page: May Support 01:03:40.280 Data Area 4 for Telemetry Log: Not Supported 01:03:40.280 Error Log Page Entries Supported: 128 01:03:40.280 Keep Alive: Supported 01:03:40.280 Keep Alive Granularity: 10000 ms 01:03:40.280 01:03:40.280 NVM Command Set Attributes 01:03:40.280 ========================== 01:03:40.280 Submission Queue Entry Size 01:03:40.280 Max: 64 01:03:40.280 Min: 64 01:03:40.280 Completion Queue Entry Size 01:03:40.280 Max: 16 01:03:40.280 Min: 16 01:03:40.280 Number of Namespaces: 32 01:03:40.280 Compare Command: Supported 01:03:40.280 Write Uncorrectable Command: Not Supported 01:03:40.280 Dataset Management Command: Supported 01:03:40.280 Write Zeroes Command: Supported 01:03:40.280 Set Features Save Field: Not Supported 01:03:40.280 Reservations: Supported 01:03:40.280 Timestamp: Not Supported 01:03:40.280 Copy: Supported 01:03:40.280 Volatile Write Cache: Present 01:03:40.280 Atomic Write Unit (Normal): 1 01:03:40.280 Atomic Write Unit (PFail): 1 01:03:40.280 Atomic Compare & Write Unit: 1 01:03:40.280 Fused Compare & Write: Supported 01:03:40.280 Scatter-Gather List 01:03:40.280 SGL Command Set: Supported 01:03:40.280 SGL Keyed: Supported 01:03:40.280 SGL Bit Bucket Descriptor: Not Supported 01:03:40.280 SGL Metadata Pointer: Not Supported 01:03:40.280 Oversized SGL: Not Supported 01:03:40.280 SGL Metadata Address: Not Supported 01:03:40.280 SGL Offset: Supported 01:03:40.280 Transport SGL Data Block: Not Supported 01:03:40.280 Replay Protected Memory Block: Not Supported 01:03:40.280 01:03:40.280 Firmware Slot Information 01:03:40.280 ========================= 01:03:40.280 Active slot: 1 01:03:40.280 Slot 1 Firmware Revision: 25.01 01:03:40.280 01:03:40.280 01:03:40.280 Commands Supported and Effects 01:03:40.280 ============================== 01:03:40.280 Admin Commands 01:03:40.280 -------------- 01:03:40.280 Get Log Page (02h): Supported 01:03:40.280 Identify (06h): Supported 01:03:40.280 Abort (08h): Supported 01:03:40.280 Set Features (09h): Supported 01:03:40.280 Get Features (0Ah): Supported 01:03:40.280 Asynchronous Event Request (0Ch): Supported 01:03:40.280 Keep Alive (18h): Supported 01:03:40.280 I/O Commands 01:03:40.280 ------------ 01:03:40.280 Flush (00h): Supported LBA-Change 01:03:40.280 Write (01h): Supported LBA-Change 01:03:40.280 Read (02h): Supported 01:03:40.280 Compare (05h): Supported 01:03:40.280 Write Zeroes (08h): Supported LBA-Change 01:03:40.280 Dataset Management (09h): Supported LBA-Change 01:03:40.280 Copy (19h): Supported LBA-Change 01:03:40.280 01:03:40.280 Error Log 01:03:40.280 ========= 01:03:40.280 01:03:40.280 Arbitration 01:03:40.280 =========== 01:03:40.280 Arbitration Burst: 1 01:03:40.280 01:03:40.280 Power Management 01:03:40.280 ================ 01:03:40.280 Number of Power States: 1 01:03:40.280 Current Power State: Power State #0 01:03:40.280 Power State #0: 01:03:40.280 Max Power: 0.00 W 01:03:40.280 Non-Operational State: Operational 01:03:40.280 Entry Latency: Not Reported 01:03:40.280 Exit Latency: Not Reported 01:03:40.280 Relative Read Throughput: 0 01:03:40.280 Relative Read Latency: 0 01:03:40.280 Relative Write Throughput: 0 01:03:40.280 Relative Write Latency: 0 01:03:40.280 
Idle Power: Not Reported 01:03:40.280 Active Power: Not Reported 01:03:40.280 Non-Operational Permissive Mode: Not Supported 01:03:40.280 01:03:40.280 Health Information 01:03:40.280 ================== 01:03:40.280 Critical Warnings: 01:03:40.280 Available Spare Space: OK 01:03:40.280 Temperature: OK 01:03:40.280 Device Reliability: OK 01:03:40.280 Read Only: No 01:03:40.280 Volatile Memory Backup: OK 01:03:40.280 Current Temperature: 0 Kelvin (-273 Celsius) 01:03:40.280 Temperature Threshold: 0 Kelvin (-273 Celsius) 01:03:40.280 Available Spare: 0% 01:03:40.280 Available Spare Threshold: 0% 01:03:40.280 Life Percentage Used:[2024-12-09 06:02:34.684151] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.280 [2024-12-09 06:02:34.684159] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xd81d90) 01:03:40.280 [2024-12-09 06:02:34.684168] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.280 [2024-12-09 06:02:34.684198] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc3080, cid 7, qid 0 01:03:40.280 [2024-12-09 06:02:34.684547] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.280 [2024-12-09 06:02:34.684564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.280 [2024-12-09 06:02:34.684569] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.280 [2024-12-09 06:02:34.684574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc3080) on tqpair=0xd81d90 01:03:40.280 [2024-12-09 06:02:34.684616] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 01:03:40.280 [2024-12-09 06:02:34.684630] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2600) on tqpair=0xd81d90 01:03:40.280 [2024-12-09 06:02:34.684637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:40.280 [2024-12-09 06:02:34.684656] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2780) on tqpair=0xd81d90 01:03:40.280 [2024-12-09 06:02:34.684664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:40.280 [2024-12-09 06:02:34.684669] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2900) on tqpair=0xd81d90 01:03:40.280 [2024-12-09 06:02:34.684674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:40.280 [2024-12-09 06:02:34.684680] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2a80) on tqpair=0xd81d90 01:03:40.280 [2024-12-09 06:02:34.684685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:03:40.280 [2024-12-09 06:02:34.684695] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.280 [2024-12-09 06:02:34.684700] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.280 [2024-12-09 06:02:34.684704] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd81d90) 01:03:40.280 [2024-12-09 06:02:34.684713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
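The Health Information block above is the identify utility reading the controller's SMART/health log page over the TCP connection; the bracketed [2024-12-09 ...] entries interleaved with it are the controller teardown ("Prepare to destruct SSD", aborted admin requests, FABRIC PROPERTY GET/SET for the shutdown sequence) already running on the same reactor, which is why the two outputs overlap. A minimal sketch of reproducing a similar report by hand against the target used in this run (10.0.0.3:4420, subsystem nqn.2016-06.io.spdk:cnode1) with stock nvme-cli, assuming the target is still listening, the kernel initiator modules are available, and the controller enumerates as /dev/nvme0:
  modprobe nvme-tcp                                                       # kernel NVMe/TCP initiator (the test itself uses the SPDK userspace initiator instead)
  nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme id-ctrl /dev/nvme0                                                 # controller capabilities, as in the dump above
  nvme smart-log /dev/nvme0                                               # health information / SMART counters
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1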
01:03:40.280 [2024-12-09 06:02:34.684741] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2a80, cid 3, qid 0 01:03:40.280 [2024-12-09 06:02:34.685131] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.280 [2024-12-09 06:02:34.685147] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.280 [2024-12-09 06:02:34.685152] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.280 [2024-12-09 06:02:34.685156] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2a80) on tqpair=0xd81d90 01:03:40.280 [2024-12-09 06:02:34.685165] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.280 [2024-12-09 06:02:34.685169] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.280 [2024-12-09 06:02:34.685173] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd81d90) 01:03:40.280 [2024-12-09 06:02:34.685181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.280 [2024-12-09 06:02:34.685207] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2a80, cid 3, qid 0 01:03:40.280 [2024-12-09 06:02:34.685286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.280 [2024-12-09 06:02:34.685293] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.280 [2024-12-09 06:02:34.685297] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.280 [2024-12-09 06:02:34.685301] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2a80) on tqpair=0xd81d90 01:03:40.280 [2024-12-09 06:02:34.685306] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 01:03:40.280 [2024-12-09 06:02:34.685312] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 01:03:40.280 [2024-12-09 06:02:34.685322] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.280 [2024-12-09 06:02:34.685327] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.280 [2024-12-09 06:02:34.685331] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd81d90) 01:03:40.281 [2024-12-09 06:02:34.685339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.281 [2024-12-09 06:02:34.685358] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2a80, cid 3, qid 0 01:03:40.281 [2024-12-09 06:02:34.685430] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.281 [2024-12-09 06:02:34.685438] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.281 [2024-12-09 06:02:34.685442] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.281 [2024-12-09 06:02:34.685446] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2a80) on tqpair=0xd81d90 01:03:40.281 [2024-12-09 06:02:34.685457] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.281 [2024-12-09 06:02:34.685478] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.281 [2024-12-09 06:02:34.685482] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd81d90) 01:03:40.281 [2024-12-09 06:02:34.685490] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.281 [2024-12-09 06:02:34.685508] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2a80, cid 3, qid 0 01:03:40.281 [2024-12-09 06:02:34.685942] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.281 [2024-12-09 06:02:34.685961] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.281 [2024-12-09 06:02:34.685966] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.281 [2024-12-09 06:02:34.685971] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2a80) on tqpair=0xd81d90 01:03:40.281 [2024-12-09 06:02:34.685984] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.281 [2024-12-09 06:02:34.685989] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.281 [2024-12-09 06:02:34.685993] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd81d90) 01:03:40.281 [2024-12-09 06:02:34.686002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.281 [2024-12-09 06:02:34.686027] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2a80, cid 3, qid 0 01:03:40.281 [2024-12-09 06:02:34.686532] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.281 [2024-12-09 06:02:34.686561] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.281 [2024-12-09 06:02:34.686566] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.281 [2024-12-09 06:02:34.686586] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2a80) on tqpair=0xd81d90 01:03:40.281 [2024-12-09 06:02:34.686597] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.281 [2024-12-09 06:02:34.686602] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.281 [2024-12-09 06:02:34.686615] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd81d90) 01:03:40.281 [2024-12-09 06:02:34.686640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.281 [2024-12-09 06:02:34.686676] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2a80, cid 3, qid 0 01:03:40.281 [2024-12-09 06:02:34.686816] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.281 [2024-12-09 06:02:34.686830] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.281 [2024-12-09 06:02:34.686835] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.281 [2024-12-09 06:02:34.686839] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2a80) on tqpair=0xd81d90 01:03:40.281 [2024-12-09 06:02:34.686851] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.281 [2024-12-09 06:02:34.686856] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.281 [2024-12-09 06:02:34.686860] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd81d90) 01:03:40.281 [2024-12-09 06:02:34.686868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.281 [2024-12-09 06:02:34.686888] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0xdc2a80, cid 3, qid 0 01:03:40.281 [2024-12-09 06:02:34.687269] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.281 [2024-12-09 06:02:34.687283] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.281 [2024-12-09 06:02:34.687288] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.281 [2024-12-09 06:02:34.687292] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2a80) on tqpair=0xd81d90 01:03:40.281 [2024-12-09 06:02:34.687303] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.281 [2024-12-09 06:02:34.687308] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.281 [2024-12-09 06:02:34.687312] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd81d90) 01:03:40.281 [2024-12-09 06:02:34.687319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.281 [2024-12-09 06:02:34.687340] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2a80, cid 3, qid 0 01:03:40.281 [2024-12-09 06:02:34.687617] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.281 [2024-12-09 06:02:34.687631] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.281 [2024-12-09 06:02:34.687635] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.281 [2024-12-09 06:02:34.687640] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2a80) on tqpair=0xd81d90 01:03:40.281 [2024-12-09 06:02:34.691727] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 01:03:40.281 [2024-12-09 06:02:34.691745] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 01:03:40.281 [2024-12-09 06:02:34.691750] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd81d90) 01:03:40.281 [2024-12-09 06:02:34.691775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:03:40.281 [2024-12-09 06:02:34.691803] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc2a80, cid 3, qid 0 01:03:40.281 [2024-12-09 06:02:34.691873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 01:03:40.281 [2024-12-09 06:02:34.691881] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 01:03:40.281 [2024-12-09 06:02:34.691884] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 01:03:40.281 [2024-12-09 06:02:34.691889] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc2a80) on tqpair=0xd81d90 01:03:40.281 [2024-12-09 06:02:34.691898] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 01:03:40.281 0% 01:03:40.281 Data Units Read: 0 01:03:40.281 Data Units Written: 0 01:03:40.281 Host Read Commands: 0 01:03:40.281 Host Write Commands: 0 01:03:40.281 Controller Busy Time: 0 minutes 01:03:40.281 Power Cycles: 0 01:03:40.281 Power On Hours: 0 hours 01:03:40.281 Unsafe Shutdowns: 0 01:03:40.281 Unrecoverable Media Errors: 0 01:03:40.281 Lifetime Error Log Entries: 0 01:03:40.281 Warning Temperature Time: 0 minutes 01:03:40.281 Critical Temperature Time: 0 minutes 01:03:40.281 01:03:40.281 Number of Queues 01:03:40.281 ================ 01:03:40.281 Number of I/O Submission Queues: 127 01:03:40.281 Number of I/O Completion Queues: 127 01:03:40.281 
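The queue counts just above (127 I/O submission and completion queues) are consistent with the target's default of 128 queue pairs per controller, and the all-zero SMART counters are expected for the in-memory SPDK bdev controller backing this subsystem. After the namespace listing below, the trace shows the script deleting the subsystem with rpc_cmd nvmf_delete_subsystem; a minimal hand-run equivalent, assuming rpc_cmd is the usual thin wrapper around the JSON-RPC client and the target listens on the default /var/tmp/spdk.sock socket:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems                                  # inspect what the target currently exposes
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1     # same call the test's teardown issues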
01:03:40.281 Active Namespaces 01:03:40.281 ================= 01:03:40.281 Namespace ID:1 01:03:40.281 Error Recovery Timeout: Unlimited 01:03:40.281 Command Set Identifier: NVM (00h) 01:03:40.281 Deallocate: Supported 01:03:40.281 Deallocated/Unwritten Error: Not Supported 01:03:40.281 Deallocated Read Value: Unknown 01:03:40.281 Deallocate in Write Zeroes: Not Supported 01:03:40.281 Deallocated Guard Field: 0xFFFF 01:03:40.281 Flush: Supported 01:03:40.281 Reservation: Supported 01:03:40.281 Namespace Sharing Capabilities: Multiple Controllers 01:03:40.281 Size (in LBAs): 131072 (0GiB) 01:03:40.281 Capacity (in LBAs): 131072 (0GiB) 01:03:40.281 Utilization (in LBAs): 131072 (0GiB) 01:03:40.281 NGUID: ABCDEF0123456789ABCDEF0123456789 01:03:40.281 EUI64: ABCDEF0123456789 01:03:40.281 UUID: 8efac89d-951f-46e5-9f33-ab22b6f04a01 01:03:40.281 Thin Provisioning: Not Supported 01:03:40.281 Per-NS Atomic Units: Yes 01:03:40.281 Atomic Boundary Size (Normal): 0 01:03:40.281 Atomic Boundary Size (PFail): 0 01:03:40.281 Atomic Boundary Offset: 0 01:03:40.281 Maximum Single Source Range Length: 65535 01:03:40.281 Maximum Copy Length: 65535 01:03:40.281 Maximum Source Range Count: 1 01:03:40.281 NGUID/EUI64 Never Reused: No 01:03:40.281 Namespace Write Protected: No 01:03:40.281 Number of LBA Formats: 1 01:03:40.281 Current LBA Format: LBA Format #00 01:03:40.281 LBA Format #00: Data Size: 512 Metadata Size: 0 01:03:40.281 01:03:40.281 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 01:03:40.281 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:03:40.281 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 01:03:40.281 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:03:40.281 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:03:40.281 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 01:03:40.281 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 01:03:40.281 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 01:03:40.281 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 01:03:40.281 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:03:40.281 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 01:03:40.281 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 01:03:40.281 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:03:40.281 rmmod nvme_tcp 01:03:40.281 rmmod nvme_fabrics 01:03:40.281 rmmod nvme_keyring 01:03:40.281 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:03:40.281 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 01:03:40.281 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 01:03:40.281 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 86389 ']' 01:03:40.281 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 86389 01:03:40.281 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 86389 ']' 01:03:40.281 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 
86389 01:03:40.282 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 01:03:40.541 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:03:40.541 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86389 01:03:40.541 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:03:40.541 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:03:40.541 killing process with pid 86389 01:03:40.541 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86389' 01:03:40.541 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 86389 01:03:40.541 06:02:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 86389 01:03:40.541 06:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:03:40.541 06:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:03:40.541 06:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:03:40.541 06:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 01:03:40.541 06:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:03:40.541 06:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 01:03:40.541 06:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 01:03:40.541 06:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:03:40.541 06:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:03:40.541 06:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:03:40.541 06:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:03:40.541 06:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:03:40.541 06:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:03:40.815 06:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:03:40.815 06:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:03:40.815 06:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:03:40.815 06:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:03:40.815 06:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:03:40.815 06:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:03:40.815 06:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:03:40.815 06:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:40.815 06:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:40.815 06:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 01:03:40.815 06:02:35 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:40.815 06:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:03:40.815 06:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:40.815 06:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 01:03:40.815 01:03:40.815 real 0m2.259s 01:03:40.815 user 0m4.711s 01:03:40.815 sys 0m0.720s 01:03:40.815 06:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 01:03:40.815 06:02:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 01:03:40.815 ************************************ 01:03:40.815 END TEST nvmf_identify 01:03:40.815 ************************************ 01:03:40.815 06:02:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 01:03:40.815 06:02:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:03:40.815 06:02:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:03:40.815 06:02:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:03:40.815 ************************************ 01:03:40.815 START TEST nvmf_perf 01:03:40.815 ************************************ 01:03:40.815 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 01:03:41.078 * Looking for test storage... 01:03:41.078 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > 
ver2_l ? ver1_l : ver2_l) )) 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:03:41.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:41.078 --rc genhtml_branch_coverage=1 01:03:41.078 --rc genhtml_function_coverage=1 01:03:41.078 --rc genhtml_legend=1 01:03:41.078 --rc geninfo_all_blocks=1 01:03:41.078 --rc geninfo_unexecuted_blocks=1 01:03:41.078 01:03:41.078 ' 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:03:41.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:41.078 --rc genhtml_branch_coverage=1 01:03:41.078 --rc genhtml_function_coverage=1 01:03:41.078 --rc genhtml_legend=1 01:03:41.078 --rc geninfo_all_blocks=1 01:03:41.078 --rc geninfo_unexecuted_blocks=1 01:03:41.078 01:03:41.078 ' 01:03:41.078 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:03:41.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:41.078 --rc genhtml_branch_coverage=1 01:03:41.079 --rc genhtml_function_coverage=1 01:03:41.079 --rc genhtml_legend=1 01:03:41.079 --rc geninfo_all_blocks=1 01:03:41.079 --rc geninfo_unexecuted_blocks=1 01:03:41.079 01:03:41.079 ' 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:03:41.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:41.079 --rc genhtml_branch_coverage=1 01:03:41.079 --rc genhtml_function_coverage=1 01:03:41.079 --rc genhtml_legend=1 01:03:41.079 --rc geninfo_all_blocks=1 01:03:41.079 --rc geninfo_unexecuted_blocks=1 01:03:41.079 01:03:41.079 ' 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:03:41.079 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:03:41.079 Cannot find device "nvmf_init_br" 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:03:41.079 Cannot find device "nvmf_init_br2" 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:03:41.079 Cannot find device "nvmf_tgt_br" 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:03:41.079 Cannot find device "nvmf_tgt_br2" 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:03:41.079 Cannot find device "nvmf_init_br" 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:03:41.079 Cannot find device "nvmf_init_br2" 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:03:41.079 Cannot find device "nvmf_tgt_br" 01:03:41.079 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 01:03:41.080 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:03:41.080 Cannot find device "nvmf_tgt_br2" 01:03:41.080 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 01:03:41.080 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:03:41.080 Cannot find device "nvmf_br" 01:03:41.080 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 01:03:41.080 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:03:41.338 Cannot find device "nvmf_init_if" 01:03:41.338 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 01:03:41.338 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:03:41.338 Cannot find device "nvmf_init_if2" 01:03:41.338 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 01:03:41.338 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:41.338 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:41.338 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 01:03:41.338 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:41.338 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:41.338 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 01:03:41.338 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:03:41.338 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:03:41.338 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:03:41.338 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:03:41.338 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:03:41.338 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:03:41.338 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:03:41.338 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:03:41.338 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:03:41.338 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:03:41.338 06:02:35 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:03:41.338 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:03:41.338 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:03:41.338 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:03:41.338 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:03:41.339 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:03:41.339 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:03:41.339 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:03:41.339 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:03:41.339 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:03:41.339 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:03:41.339 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:03:41.339 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:03:41.339 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:03:41.339 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:03:41.339 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:03:41.339 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:03:41.339 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:03:41.339 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:03:41.339 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:03:41.339 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:03:41.339 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:03:41.339 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:03:41.339 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:03:41.339 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 01:03:41.339 01:03:41.339 --- 10.0.0.3 ping statistics --- 01:03:41.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:41.339 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 01:03:41.339 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:03:41.597 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
01:03:41.597 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.359 ms 01:03:41.597 01:03:41.597 --- 10.0.0.4 ping statistics --- 01:03:41.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:41.597 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 01:03:41.597 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:03:41.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:03:41.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 01:03:41.597 01:03:41.597 --- 10.0.0.1 ping statistics --- 01:03:41.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:41.597 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 01:03:41.597 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:03:41.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:03:41.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 01:03:41.597 01:03:41.597 --- 10.0.0.2 ping statistics --- 01:03:41.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:41.597 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 01:03:41.597 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:03:41.597 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 01:03:41.597 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:03:41.597 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:03:41.597 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:03:41.597 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:03:41.597 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:03:41.597 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:03:41.597 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:03:41.598 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 01:03:41.598 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:03:41.598 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 01:03:41.598 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 01:03:41.598 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=86651 01:03:41.598 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:03:41.598 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 86651 01:03:41.598 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 86651 ']' 01:03:41.598 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:41.598 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 01:03:41.598 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:41.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
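A minimal sketch of what the nvmfappstart/waitforlisten sequence recorded just above amounts to, assuming the repository path, shm id and RPC socket shown in this log; it is a paraphrase for readability, not the exact helper from autotest_common.sh:

    # Start the SPDK NVMe-oF target inside the test namespace with the same core
    # mask (-m 0xF), trace mask (-e 0xFFFF) and shm id (-i 0) as the run above,
    # then poll the RPC socket until the app is listening.
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5    # keep retrying until the UNIX domain socket answers
    done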
01:03:41.598 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 01:03:41.598 06:02:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 01:03:41.598 [2024-12-09 06:02:36.026459] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:03:41.598 [2024-12-09 06:02:36.026558] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:03:41.598 [2024-12-09 06:02:36.177277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:03:41.856 [2024-12-09 06:02:36.210313] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:03:41.856 [2024-12-09 06:02:36.210373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:03:41.856 [2024-12-09 06:02:36.210399] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:03:41.856 [2024-12-09 06:02:36.210406] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:03:41.856 [2024-12-09 06:02:36.210413] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:03:41.856 [2024-12-09 06:02:36.211163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:03:41.857 [2024-12-09 06:02:36.211223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:03:41.857 [2024-12-09 06:02:36.211373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:03:41.857 [2024-12-09 06:02:36.211377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:03:42.793 06:02:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:42.793 06:02:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 01:03:42.793 06:02:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:03:42.793 06:02:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 01:03:42.793 06:02:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 01:03:42.793 06:02:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:03:42.793 06:02:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:03:42.793 06:02:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 01:03:43.052 06:02:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 01:03:43.052 06:02:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 01:03:43.310 06:02:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 01:03:43.310 06:02:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:03:43.569 06:02:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 01:03:43.569 06:02:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 01:03:43.569 06:02:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 
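A hedged sketch of the bdev setup this part of perf.sh performs, using the sizes and PCIe address recorded above; in the actual run the NVMe controller comes from gen_nvme.sh / load_subsystem_config rather than the explicit attach RPC shown here for illustration:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # 64 MiB malloc bdev with 512-byte blocks; the RPC prints the bdev name ("Malloc0")
    malloc_bdev=$($rpc bdev_malloc_create 64 512)
    # Illustrative equivalent of the generated config: attach the local controller
    # at 0000:00:10.0, which exposes bdev "Nvme0n1"
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    bdevs="$malloc_bdev Nvme0n1"    # matches bdevs=' Malloc0 Nvme0n1' in the log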
01:03:43.569 06:02:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 01:03:43.569 06:02:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 01:03:43.827 [2024-12-09 06:02:38.352079] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:03:43.827 06:02:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:03:44.085 06:02:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 01:03:44.085 06:02:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:03:44.652 06:02:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 01:03:44.652 06:02:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 01:03:44.652 06:02:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:03:45.215 [2024-12-09 06:02:39.501535] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:03:45.215 06:02:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:03:45.215 06:02:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 01:03:45.215 06:02:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 01:03:45.215 06:02:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 01:03:45.215 06:02:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 01:03:46.590 Initializing NVMe Controllers 01:03:46.590 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 01:03:46.590 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 01:03:46.590 Initialization complete. Launching workers. 01:03:46.590 ======================================================== 01:03:46.590 Latency(us) 01:03:46.590 Device Information : IOPS MiB/s Average min max 01:03:46.590 PCIE (0000:00:10.0) NSID 1 from core 0: 23104.00 90.25 1385.12 360.98 8369.88 01:03:46.590 ======================================================== 01:03:46.590 Total : 23104.00 90.25 1385.12 360.98 8369.88 01:03:46.590 01:03:46.590 06:02:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 01:03:47.969 Initializing NVMe Controllers 01:03:47.969 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:03:47.969 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:03:47.969 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 01:03:47.969 Initialization complete. Launching workers. 
01:03:47.969 ======================================================== 01:03:47.969 Latency(us) 01:03:47.969 Device Information : IOPS MiB/s Average min max 01:03:47.969 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3524.96 13.77 283.33 103.21 7280.50 01:03:47.969 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.00 0.48 8194.63 1445.72 12062.52 01:03:47.969 ======================================================== 01:03:47.969 Total : 3647.96 14.25 550.08 103.21 12062.52 01:03:47.969 01:03:47.969 06:02:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 01:03:48.907 Initializing NVMe Controllers 01:03:48.907 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:03:48.907 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:03:48.907 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 01:03:48.907 Initialization complete. Launching workers. 01:03:48.907 ======================================================== 01:03:48.907 Latency(us) 01:03:48.907 Device Information : IOPS MiB/s Average min max 01:03:48.907 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9644.00 37.67 3321.06 614.82 8497.43 01:03:48.907 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2650.00 10.35 12189.50 3389.59 23387.31 01:03:48.907 ======================================================== 01:03:48.907 Total : 12294.00 48.02 5232.67 614.82 23387.31 01:03:48.907 01:03:49.166 06:02:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 01:03:49.166 06:02:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 01:03:51.702 Initializing NVMe Controllers 01:03:51.702 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:03:51.702 Controller IO queue size 128, less than required. 01:03:51.702 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:03:51.702 Controller IO queue size 128, less than required. 01:03:51.702 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:03:51.702 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:03:51.702 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 01:03:51.702 Initialization complete. Launching workers. 
01:03:51.702 ======================================================== 01:03:51.702 Latency(us) 01:03:51.702 Device Information : IOPS MiB/s Average min max 01:03:51.702 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1776.13 444.03 72923.01 46644.57 119438.64 01:03:51.702 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 615.52 153.88 237488.65 98932.46 491985.53 01:03:51.702 ======================================================== 01:03:51.702 Total : 2391.65 597.91 115276.23 46644.57 491985.53 01:03:51.702 01:03:51.702 06:02:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 01:03:51.961 Initializing NVMe Controllers 01:03:51.961 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:03:51.961 Controller IO queue size 128, less than required. 01:03:51.961 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:03:51.961 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 01:03:51.961 Controller IO queue size 128, less than required. 01:03:51.961 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:03:51.961 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 01:03:51.961 WARNING: Some requested NVMe devices were skipped 01:03:51.961 No valid NVMe controllers or AIO or URING devices found 01:03:51.961 06:02:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 01:03:54.492 Initializing NVMe Controllers 01:03:54.492 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:03:54.492 Controller IO queue size 128, less than required. 01:03:54.492 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:03:54.492 Controller IO queue size 128, less than required. 01:03:54.492 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:03:54.492 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:03:54.492 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 01:03:54.492 Initialization complete. Launching workers. 
01:03:54.492 01:03:54.492 ==================== 01:03:54.492 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 01:03:54.492 TCP transport: 01:03:54.492 polls: 7941 01:03:54.492 idle_polls: 4468 01:03:54.492 sock_completions: 3473 01:03:54.492 nvme_completions: 4619 01:03:54.492 submitted_requests: 6972 01:03:54.492 queued_requests: 1 01:03:54.492 01:03:54.492 ==================== 01:03:54.492 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 01:03:54.492 TCP transport: 01:03:54.492 polls: 8084 01:03:54.492 idle_polls: 4603 01:03:54.492 sock_completions: 3481 01:03:54.492 nvme_completions: 6699 01:03:54.492 submitted_requests: 9974 01:03:54.492 queued_requests: 1 01:03:54.492 ======================================================== 01:03:54.492 Latency(us) 01:03:54.492 Device Information : IOPS MiB/s Average min max 01:03:54.492 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1154.31 288.58 112421.28 69774.48 190391.75 01:03:54.492 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1674.23 418.56 77490.77 33151.09 135630.41 01:03:54.492 ======================================================== 01:03:54.492 Total : 2828.54 707.14 91745.73 33151.09 190391.75 01:03:54.492 01:03:54.492 06:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 01:03:54.749 06:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:03:55.007 06:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 01:03:55.007 06:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 01:03:55.007 06:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 01:03:55.007 06:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 01:03:55.007 06:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 01:03:55.007 06:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:03:55.007 06:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 01:03:55.007 06:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 01:03:55.007 06:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:03:55.007 rmmod nvme_tcp 01:03:55.007 rmmod nvme_fabrics 01:03:55.007 rmmod nvme_keyring 01:03:55.007 06:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:03:55.007 06:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 01:03:55.007 06:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 01:03:55.007 06:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 86651 ']' 01:03:55.007 06:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 86651 01:03:55.007 06:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 86651 ']' 01:03:55.007 06:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 86651 01:03:55.007 06:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 01:03:55.007 06:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:03:55.007 06:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86651 01:03:55.007 killing process with pid 86651 01:03:55.007 06:02:49 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:03:55.007 06:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:03:55.007 06:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86651' 01:03:55.007 06:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 86651 01:03:55.007 06:02:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 86651 01:03:55.572 06:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:03:55.572 06:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:03:55.572 06:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:03:55.572 06:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 01:03:55.572 06:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 01:03:55.572 06:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:03:55.572 06:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 01:03:55.572 06:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:03:55.572 06:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:03:55.572 06:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:03:55.572 06:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:03:55.830 06:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:03:55.830 06:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:03:55.830 06:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:03:55.830 06:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:03:55.830 06:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:03:55.830 06:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:03:55.830 06:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:03:55.830 06:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:03:55.830 06:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:03:55.830 06:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:55.830 06:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:55.830 06:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 01:03:55.831 06:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:03:55.831 06:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:03:55.831 06:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:55.831 06:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 01:03:55.831 01:03:55.831 real 0m15.005s 01:03:55.831 user 0m55.106s 01:03:55.831 sys 0m3.533s 01:03:55.831 06:02:50 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 01:03:55.831 ************************************ 01:03:55.831 END TEST nvmf_perf 01:03:55.831 06:02:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 01:03:55.831 ************************************ 01:03:55.831 06:02:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 01:03:55.831 06:02:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:03:55.831 06:02:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:03:55.831 06:02:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:03:56.090 ************************************ 01:03:56.090 START TEST nvmf_fio_host 01:03:56.090 ************************************ 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 01:03:56.090 * Looking for test storage... 01:03:56.090 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:03:56.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:56.090 --rc genhtml_branch_coverage=1 01:03:56.090 --rc genhtml_function_coverage=1 01:03:56.090 --rc genhtml_legend=1 01:03:56.090 --rc geninfo_all_blocks=1 01:03:56.090 --rc geninfo_unexecuted_blocks=1 01:03:56.090 01:03:56.090 ' 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:03:56.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:56.090 --rc genhtml_branch_coverage=1 01:03:56.090 --rc genhtml_function_coverage=1 01:03:56.090 --rc genhtml_legend=1 01:03:56.090 --rc geninfo_all_blocks=1 01:03:56.090 --rc geninfo_unexecuted_blocks=1 01:03:56.090 01:03:56.090 ' 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:03:56.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:56.090 --rc genhtml_branch_coverage=1 01:03:56.090 --rc genhtml_function_coverage=1 01:03:56.090 --rc genhtml_legend=1 01:03:56.090 --rc geninfo_all_blocks=1 01:03:56.090 --rc geninfo_unexecuted_blocks=1 01:03:56.090 01:03:56.090 ' 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:03:56.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:03:56.090 --rc genhtml_branch_coverage=1 01:03:56.090 --rc genhtml_function_coverage=1 01:03:56.090 --rc genhtml_legend=1 01:03:56.090 --rc geninfo_all_blocks=1 01:03:56.090 --rc geninfo_unexecuted_blocks=1 01:03:56.090 01:03:56.090 ' 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:03:56.090 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:03:56.091 06:02:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:56.091 06:02:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:03:56.091 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
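A hedged illustration of how the NVME_HOSTNQN/NVME_HOSTID values defined above are typically consumed by an initiator; this fio host test may instead drive I/O through the SPDK fio plugin, so treat the kernel nvme-cli form below purely as an example, with the target address taken from the veth topology in this log and an illustrative subsystem NQN:

    hostnqn=$(nvme gen-hostnqn)     # e.g. the nqn.2014-08.org.nvmexpress:uuid:... value recorded above
    hostid=${hostnqn##*:}           # UUID portion, reused for --hostid
    nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$hostnqn" --hostid="$hostid"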
01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:03:56.091 Cannot find device "nvmf_init_br" 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 01:03:56.091 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:03:56.349 Cannot find device "nvmf_init_br2" 01:03:56.349 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 01:03:56.349 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:03:56.349 Cannot find device "nvmf_tgt_br" 01:03:56.349 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 01:03:56.349 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 01:03:56.349 Cannot find device "nvmf_tgt_br2" 01:03:56.349 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 01:03:56.349 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:03:56.349 Cannot find device "nvmf_init_br" 01:03:56.349 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 01:03:56.349 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:03:56.349 Cannot find device "nvmf_init_br2" 01:03:56.349 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 01:03:56.349 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:03:56.349 Cannot find device "nvmf_tgt_br" 01:03:56.349 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 01:03:56.349 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:03:56.349 Cannot find device "nvmf_tgt_br2" 01:03:56.349 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 01:03:56.349 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:03:56.349 Cannot find device "nvmf_br" 01:03:56.349 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 01:03:56.349 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:03:56.349 Cannot find device "nvmf_init_if" 01:03:56.349 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 01:03:56.349 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:03:56.349 Cannot find device "nvmf_init_if2" 01:03:56.349 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 01:03:56.349 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:03:56.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:56.349 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 01:03:56.349 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:03:56.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:03:56.349 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 01:03:56.349 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:03:56.349 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:03:56.349 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:03:56.349 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:03:56.349 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:03:56.349 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:03:56.349 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:03:56.349 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 01:03:56.350 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:03:56.350 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:03:56.350 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:03:56.350 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:03:56.350 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:03:56.350 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:03:56.350 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:03:56.350 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:03:56.350 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:03:56.608 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:03:56.608 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:03:56.608 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:03:56.608 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:03:56.608 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:03:56.608 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:03:56.608 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:03:56.608 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:03:56.608 06:02:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:03:56.608 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:03:56.608 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:03:56.608 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:03:56.608 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:03:56.608 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:03:56.608 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:03:56.608 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:03:56.608 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:03:56.608 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 01:03:56.608 01:03:56.608 --- 10.0.0.3 ping statistics --- 01:03:56.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:56.608 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 01:03:56.608 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:03:56.608 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:03:56.608 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 01:03:56.608 01:03:56.608 --- 10.0.0.4 ping statistics --- 01:03:56.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:56.608 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 01:03:56.608 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:03:56.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:03:56.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 01:03:56.608 01:03:56.608 --- 10.0.0.1 ping statistics --- 01:03:56.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:56.608 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 01:03:56.609 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:03:56.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:03:56.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 01:03:56.609 01:03:56.609 --- 10.0.0.2 ping statistics --- 01:03:56.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:03:56.609 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 01:03:56.609 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:03:56.609 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 01:03:56.609 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:03:56.609 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:03:56.609 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:03:56.609 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:03:56.609 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:03:56.609 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:03:56.609 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:03:56.609 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 01:03:56.609 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 01:03:56.609 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 01:03:56.609 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 01:03:56.609 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=87186 01:03:56.609 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 01:03:56.609 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:03:56.609 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 87186 01:03:56.609 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 87186 ']' 01:03:56.609 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:03:56.609 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 01:03:56.609 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:03:56.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:03:56.609 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 01:03:56.609 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 01:03:56.609 [2024-12-09 06:02:51.141628] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:03:56.609 [2024-12-09 06:02:51.142374] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:03:56.868 [2024-12-09 06:02:51.294929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:03:56.868 [2024-12-09 06:02:51.333133] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:03:56.868 [2024-12-09 06:02:51.333186] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:03:56.868 [2024-12-09 06:02:51.333199] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:03:56.868 [2024-12-09 06:02:51.333209] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:03:56.868 [2024-12-09 06:02:51.333217] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
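The entries above show host/fio.sh launching nvmf_tgt inside the nvmf_tgt_ns_spdk namespace and then blocking in waitforlisten until the target's RPC socket answers. A minimal sketch of that launch-and-wait step, assuming the namespace already exists and using rpc_get_methods as a liveness probe (the real waitforlisten helper may differ; paths are the ones printed in the log):

# Launch the target inside the test namespace (paths as in the log above).
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll the default RPC socket until the app responds, then continue.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for _ in $(seq 1 100); do
    "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done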
01:03:56.868 [2024-12-09 06:02:51.334171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:03:56.868 [2024-12-09 06:02:51.334308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:03:56.868 [2024-12-09 06:02:51.334439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:03:56.868 [2024-12-09 06:02:51.334444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:03:56.868 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:03:56.868 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 01:03:56.868 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:03:57.434 [2024-12-09 06:02:51.716424] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:03:57.434 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 01:03:57.434 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 01:03:57.434 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 01:03:57.434 06:02:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 01:03:57.692 Malloc1 01:03:57.692 06:02:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:03:57.952 06:02:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:03:58.211 06:02:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:03:58.470 [2024-12-09 06:02:52.945449] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:03:58.470 06:02:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:03:58.729 06:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 01:03:58.729 06:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 01:03:58.729 06:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 01:03:58.729 06:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:03:58.729 06:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:03:58.729 06:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 01:03:58.729 06:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:03:58.729 06:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # shift 01:03:58.729 06:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 01:03:58.729 06:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:03:58.729 06:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:03:58.729 06:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 01:03:58.729 06:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:03:58.729 06:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 01:03:58.729 06:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:03:58.729 06:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:03:58.729 06:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:03:58.729 06:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:03:58.729 06:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:03:58.729 06:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 01:03:58.729 06:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:03:58.729 06:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 01:03:58.729 06:02:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 01:03:58.988 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 01:03:58.988 fio-3.35 01:03:58.988 Starting 1 thread 01:04:01.518 01:04:01.518 test: (groupid=0, jobs=1): err= 0: pid=87309: Mon Dec 9 06:02:55 2024 01:04:01.518 read: IOPS=8999, BW=35.2MiB/s (36.9MB/s)(70.6MiB/2007msec) 01:04:01.518 slat (nsec): min=1956, max=356788, avg=2715.30, stdev=3664.74 01:04:01.518 clat (usec): min=3350, max=13827, avg=7421.68, stdev=552.41 01:04:01.518 lat (usec): min=3389, max=13829, avg=7424.40, stdev=552.29 01:04:01.518 clat percentiles (usec): 01:04:01.518 | 1.00th=[ 6259], 5.00th=[ 6587], 10.00th=[ 6783], 20.00th=[ 6980], 01:04:01.518 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7504], 01:04:01.518 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8029], 95.00th=[ 8225], 01:04:01.518 | 99.00th=[ 8848], 99.50th=[ 9241], 99.90th=[11338], 99.95th=[12780], 01:04:01.518 | 99.99th=[13698] 01:04:01.518 bw ( KiB/s): min=35264, max=36632, per=99.97%, avg=35986.00, stdev=572.41, samples=4 01:04:01.518 iops : min= 8816, max= 9158, avg=8996.50, stdev=143.10, samples=4 01:04:01.518 write: IOPS=9018, BW=35.2MiB/s (36.9MB/s)(70.7MiB/2007msec); 0 zone resets 01:04:01.518 slat (usec): min=2, max=276, avg= 2.77, stdev= 2.45 01:04:01.518 clat (usec): min=2557, max=13048, avg=6722.30, stdev=495.38 01:04:01.518 lat (usec): min=2571, max=13051, avg=6725.07, stdev=495.32 01:04:01.518 clat percentiles (usec): 01:04:01.518 | 1.00th=[ 5669], 5.00th=[ 5997], 10.00th=[ 6194], 20.00th=[ 6390], 01:04:01.518 | 
30.00th=[ 6521], 40.00th=[ 6587], 50.00th=[ 6718], 60.00th=[ 6849], 01:04:01.518 | 70.00th=[ 6915], 80.00th=[ 7046], 90.00th=[ 7242], 95.00th=[ 7373], 01:04:01.518 | 99.00th=[ 7767], 99.50th=[ 8291], 99.90th=[11338], 99.95th=[12256], 01:04:01.518 | 99.99th=[13042] 01:04:01.518 bw ( KiB/s): min=35904, max=36320, per=100.00%, avg=36082.00, stdev=184.10, samples=4 01:04:01.518 iops : min= 8976, max= 9080, avg=9020.50, stdev=46.03, samples=4 01:04:01.518 lat (msec) : 4=0.08%, 10=99.71%, 20=0.21% 01:04:01.518 cpu : usr=70.09%, sys=21.83%, ctx=9, majf=0, minf=7 01:04:01.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 01:04:01.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:01.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:04:01.518 issued rwts: total=18061,18100,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:01.518 latency : target=0, window=0, percentile=100.00%, depth=128 01:04:01.518 01:04:01.518 Run status group 0 (all jobs): 01:04:01.518 READ: bw=35.2MiB/s (36.9MB/s), 35.2MiB/s-35.2MiB/s (36.9MB/s-36.9MB/s), io=70.6MiB (74.0MB), run=2007-2007msec 01:04:01.518 WRITE: bw=35.2MiB/s (36.9MB/s), 35.2MiB/s-35.2MiB/s (36.9MB/s-36.9MB/s), io=70.7MiB (74.1MB), run=2007-2007msec 01:04:01.519 06:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 01:04:01.519 06:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 01:04:01.519 06:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:04:01.519 06:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:04:01.519 06:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 01:04:01.519 06:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:04:01.519 06:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 01:04:01.519 06:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 01:04:01.519 06:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:04:01.519 06:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:04:01.519 06:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:04:01.519 06:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 01:04:01.519 06:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 01:04:01.519 06:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:04:01.519 06:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:04:01.519 06:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 01:04:01.519 06:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:04:01.519 06:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:04:01.519 06:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 01:04:01.519 06:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:04:01.519 06:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 01:04:01.519 06:02:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 01:04:01.519 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 01:04:01.519 fio-3.35 01:04:01.519 Starting 1 thread 01:04:04.117 01:04:04.117 test: (groupid=0, jobs=1): err= 0: pid=87353: Mon Dec 9 06:02:58 2024 01:04:04.117 read: IOPS=7992, BW=125MiB/s (131MB/s)(251MiB/2006msec) 01:04:04.117 slat (usec): min=2, max=119, avg= 3.88, stdev= 2.22 01:04:04.117 clat (usec): min=2654, max=17688, avg=9305.17, stdev=2285.96 01:04:04.117 lat (usec): min=2657, max=17691, avg=9309.05, stdev=2286.00 01:04:04.117 clat percentiles (usec): 01:04:04.117 | 1.00th=[ 4817], 5.00th=[ 5669], 10.00th=[ 6194], 20.00th=[ 7177], 01:04:04.117 | 30.00th=[ 7963], 40.00th=[ 8717], 50.00th=[ 9372], 60.00th=[10028], 01:04:04.117 | 70.00th=[10814], 80.00th=[11338], 90.00th=[11994], 95.00th=[12911], 01:04:04.117 | 99.00th=[15008], 99.50th=[15533], 99.90th=[17171], 99.95th=[17433], 01:04:04.117 | 99.99th=[17695] 01:04:04.117 bw ( KiB/s): min=51712, max=76608, per=51.30%, avg=65600.00, stdev=10298.86, samples=4 01:04:04.117 iops : min= 3232, max= 4788, avg=4100.00, stdev=643.68, samples=4 01:04:04.117 write: IOPS=4873, BW=76.2MiB/s (79.8MB/s)(135MiB/1773msec); 0 zone resets 01:04:04.117 slat (usec): min=31, max=200, avg=40.01, stdev= 8.14 01:04:04.117 clat (usec): min=4455, max=18428, avg=11550.39, stdev=2056.52 01:04:04.117 lat (usec): min=4492, max=18462, avg=11590.40, stdev=2056.77 01:04:04.117 clat percentiles (usec): 01:04:04.117 | 1.00th=[ 7701], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9765], 01:04:04.117 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11207], 60.00th=[11731], 01:04:04.117 | 70.00th=[12518], 80.00th=[13304], 90.00th=[14615], 95.00th=[15401], 01:04:04.117 | 99.00th=[16581], 99.50th=[16909], 99.90th=[17695], 99.95th=[17695], 01:04:04.117 | 99.99th=[18482] 01:04:04.117 bw ( KiB/s): min=54656, max=79296, per=87.97%, avg=68600.00, stdev=10235.54, samples=4 01:04:04.117 iops : min= 3416, max= 4956, avg=4287.50, stdev=639.72, samples=4 01:04:04.117 lat (msec) : 4=0.18%, 10=46.74%, 20=53.08% 01:04:04.117 cpu : usr=76.91%, sys=14.86%, ctx=4, majf=0, minf=28 01:04:04.117 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 01:04:04.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:04:04.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:04:04.117 issued rwts: total=16033,8641,0,0 short=0,0,0,0 dropped=0,0,0,0 01:04:04.117 latency : target=0, window=0, percentile=100.00%, depth=128 01:04:04.117 01:04:04.117 Run status group 0 (all jobs): 01:04:04.117 READ: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=251MiB (263MB), run=2006-2006msec 01:04:04.117 WRITE: bw=76.2MiB/s (79.8MB/s), 
76.2MiB/s-76.2MiB/s (79.8MB/s-79.8MB/s), io=135MiB (142MB), run=1773-1773msec 01:04:04.117 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:04:04.117 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 01:04:04.117 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 01:04:04.117 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 01:04:04.117 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 01:04:04.117 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 01:04:04.117 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 01:04:04.117 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:04:04.117 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 01:04:04.117 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 01:04:04.118 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:04:04.118 rmmod nvme_tcp 01:04:04.118 rmmod nvme_fabrics 01:04:04.118 rmmod nvme_keyring 01:04:04.118 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:04:04.118 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 01:04:04.118 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 01:04:04.118 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 87186 ']' 01:04:04.118 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 87186 01:04:04.118 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 87186 ']' 01:04:04.118 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 87186 01:04:04.118 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 01:04:04.118 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:04:04.118 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87186 01:04:04.118 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:04:04.118 killing process with pid 87186 01:04:04.118 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:04:04.118 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87186' 01:04:04.118 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 87186 01:04:04.118 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 87186 01:04:04.377 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:04:04.377 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:04:04.377 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:04:04.377 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 01:04:04.377 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 01:04:04.377 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 01:04:04.377 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 01:04:04.377 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:04:04.377 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:04:04.377 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:04:04.377 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:04:04.377 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:04:04.377 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:04:04.377 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:04:04.377 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:04:04.377 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:04:04.377 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:04:04.377 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:04:04.636 06:02:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:04:04.636 06:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:04:04.636 06:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:04:04.637 06:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:04:04.637 06:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 01:04:04.637 06:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:04:04.637 06:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:04:04.637 06:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:04:04.637 06:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 01:04:04.637 01:04:04.637 real 0m8.685s 01:04:04.637 user 0m34.680s 01:04:04.637 sys 0m2.236s 01:04:04.637 06:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 01:04:04.637 ************************************ 01:04:04.637 END TEST nvmf_fio_host 01:04:04.637 ************************************ 01:04:04.637 06:02:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 01:04:04.637 06:02:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 01:04:04.637 06:02:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:04:04.637 06:02:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:04:04.637 06:02:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:04:04.637 ************************************ 01:04:04.637 START TEST nvmf_failover 01:04:04.637 ************************************ 01:04:04.637 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 01:04:04.896 * Looking for test storage... 01:04:04.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:04:04.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:04.896 --rc genhtml_branch_coverage=1 01:04:04.896 --rc genhtml_function_coverage=1 01:04:04.896 --rc genhtml_legend=1 01:04:04.896 --rc geninfo_all_blocks=1 01:04:04.896 --rc geninfo_unexecuted_blocks=1 01:04:04.896 01:04:04.896 ' 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:04:04.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:04.896 --rc genhtml_branch_coverage=1 01:04:04.896 --rc genhtml_function_coverage=1 01:04:04.896 --rc genhtml_legend=1 01:04:04.896 --rc geninfo_all_blocks=1 01:04:04.896 --rc geninfo_unexecuted_blocks=1 01:04:04.896 01:04:04.896 ' 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:04:04.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:04.896 --rc genhtml_branch_coverage=1 01:04:04.896 --rc genhtml_function_coverage=1 01:04:04.896 --rc genhtml_legend=1 01:04:04.896 --rc geninfo_all_blocks=1 01:04:04.896 --rc geninfo_unexecuted_blocks=1 01:04:04.896 01:04:04.896 ' 01:04:04.896 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:04:04.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:04.897 --rc genhtml_branch_coverage=1 01:04:04.897 --rc genhtml_function_coverage=1 01:04:04.897 --rc genhtml_legend=1 01:04:04.897 --rc geninfo_all_blocks=1 01:04:04.897 --rc geninfo_unexecuted_blocks=1 01:04:04.897 01:04:04.897 ' 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:04.897 
06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:04:04.897 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
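From here the failover test tears down any leftover interfaces and rebuilds the same veth/bridge topology that was set up for the fio test earlier in this log. Condensed into one place (device names, addresses and firewall rules copied from the log entries; the loops and the root-shell assumption are mine, and the iptables comment tags added by the ipts wrapper are omitted), the topology amounts to:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk      # target ends live in the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if             # initiator-side addresses
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target-side addresses
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br                # bridge the host-side ends together
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT  # let traffic cross the bridge

After this, 10.0.0.1 and 10.0.0.2 on the host can reach 10.0.0.3 and 10.0.0.4 inside the namespace across nvmf_br, which is what the four pings further down verify.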
01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:04:04.897 Cannot find device "nvmf_init_br" 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:04:04.897 Cannot find device "nvmf_init_br2" 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
01:04:04.897 Cannot find device "nvmf_tgt_br" 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:04:04.897 Cannot find device "nvmf_tgt_br2" 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:04:04.897 Cannot find device "nvmf_init_br" 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:04:04.897 Cannot find device "nvmf_init_br2" 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:04:04.897 Cannot find device "nvmf_tgt_br" 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:04:04.897 Cannot find device "nvmf_tgt_br2" 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 01:04:04.897 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:04:04.897 Cannot find device "nvmf_br" 01:04:04.898 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 01:04:04.898 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:04:05.155 Cannot find device "nvmf_init_if" 01:04:05.155 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 01:04:05.155 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:04:05.155 Cannot find device "nvmf_init_if2" 01:04:05.155 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:04:05.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:04:05.156 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:04:05.156 
06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:04:05.156 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:04:05.414 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:04:05.414 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 01:04:05.414 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:04:05.414 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:04:05.414 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 01:04:05.414 01:04:05.414 --- 10.0.0.3 ping statistics --- 01:04:05.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:05.414 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 01:04:05.414 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:04:05.414 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:04:05.414 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 01:04:05.414 01:04:05.414 --- 10.0.0.4 ping statistics --- 01:04:05.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:05.414 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 01:04:05.414 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:04:05.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:04:05.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 01:04:05.414 01:04:05.414 --- 10.0.0.1 ping statistics --- 01:04:05.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:05.414 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 01:04:05.414 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:04:05.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:04:05.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 01:04:05.414 01:04:05.414 --- 10.0.0.2 ping statistics --- 01:04:05.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:05.414 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 01:04:05.414 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:04:05.414 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 01:04:05.414 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:04:05.414 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:04:05.414 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:04:05.414 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:04:05.414 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:04:05.414 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:04:05.414 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:04:05.414 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 01:04:05.414 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:04:05.414 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 01:04:05.414 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:04:05.414 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=87626 01:04:05.414 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 01:04:05.414 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 87626 01:04:05.414 06:02:59 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 87626 ']' 01:04:05.414 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:04:05.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:04:05.414 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 01:04:05.414 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:04:05.414 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 01:04:05.414 06:02:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:04:05.414 [2024-12-09 06:02:59.860101] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:04:05.414 [2024-12-09 06:02:59.860196] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:04:05.674 [2024-12-09 06:03:00.012081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:04:05.674 [2024-12-09 06:03:00.050911] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:04:05.674 [2024-12-09 06:03:00.050978] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:04:05.674 [2024-12-09 06:03:00.050994] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:04:05.674 [2024-12-09 06:03:00.051004] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:04:05.674 [2024-12-09 06:03:00.051013] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
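With the target up, failover.sh provisions it over the default /var/tmp/spdk.sock RPC socket: a TCP transport, a malloc bdev, one subsystem, and listeners on three ports so the initiator has spare paths to fail over to. The same sequence, condensed (values copied from the log entries below; the loop and the $rpc shorthand are mine):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" bdev_malloc_create 64 512 -b Malloc0          # 64 MB backing bdev, 512-byte blocks
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                       # three listeners = alternate failover paths
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
done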
01:04:05.674 [2024-12-09 06:03:00.051879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:04:05.674 [2024-12-09 06:03:00.052024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:04:05.674 [2024-12-09 06:03:00.052030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:04:05.674 06:03:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:04:05.674 06:03:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 01:04:05.674 06:03:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:04:05.674 06:03:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 01:04:05.674 06:03:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:04:05.674 06:03:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:04:05.674 06:03:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:04:05.933 [2024-12-09 06:03:00.479488] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:04:05.933 06:03:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:04:06.499 Malloc0 01:04:06.499 06:03:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:04:06.757 06:03:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:04:07.073 06:03:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:04:07.331 [2024-12-09 06:03:01.722684] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:04:07.331 06:03:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 01:04:07.589 [2024-12-09 06:03:01.998901] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:04:07.589 06:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 01:04:07.847 [2024-12-09 06:03:02.271171] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 01:04:07.847 06:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 01:04:07.847 06:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=87724 01:04:07.847 06:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:04:07.847 06:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 87724 /var/tmp/bdevperf.sock 
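On the initiator side, bdevperf is started with -z (stay idle until driven over its own RPC socket), the subsystem is attached through two listeners with -x failover, and the test then removes listeners out from under it to force path switches. The core of what the entries below execute, with the harness plumbing stripped out (commands copied from the log; ordering notes and comments are mine):

spdk=/home/vagrant/spdk_repo/spdk
"$spdk/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &

# Attach the same subsystem through ports 4420 and 4421; with -x failover the
# second attach registers as an alternate path for NVMe0 rather than a new bdev.
"$spdk/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
"$spdk/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

# Start I/O through bdevperf's helper, then drop the active listener (note: this RPC
# goes to the target's default socket, not bdevperf's) to trigger the failover.
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &
"$spdk/scripts/rpc.py" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420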
01:04:07.847 06:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 87724 ']' 01:04:07.847 06:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:04:07.847 06:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 01:04:07.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:04:07.847 06:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:04:07.847 06:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 01:04:07.847 06:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:04:08.105 06:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:04:08.105 06:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 01:04:08.105 06:03:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 01:04:08.671 NVMe0n1 01:04:08.671 06:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 01:04:08.954 01:04:08.954 06:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=87758 01:04:08.954 06:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:04:08.954 06:03:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 01:04:09.898 06:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:04:10.465 06:03:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 01:04:13.758 06:03:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 01:04:13.758 01:04:13.758 06:03:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 01:04:14.024 [2024-12-09 06:03:08.413790] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce290 is same with the state(6) to be set 01:04:14.025 [2024-12-09 06:03:08.413844] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce290 is same with the state(6) to be set 01:04:14.025 [2024-12-09 06:03:08.413855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce290 is same with the state(6) to be set 01:04:14.025 [2024-12-09 06:03:08.413864] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce290 is same with the state(6) to be set 01:04:14.025 [2024-12-09 06:03:08.413873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x18ce290 is same with the state(6) to be set 01:04:14.025 [2024-12-09
06:03:08.414055] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce290 is same with the state(6) to be set 01:04:14.025 [2024-12-09 06:03:08.414064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce290 is same with the state(6) to be set 01:04:14.025 [2024-12-09 06:03:08.414072] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce290 is same with the state(6) to be set 01:04:14.025 [2024-12-09 06:03:08.414080] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce290 is same with the state(6) to be set 01:04:14.025 [2024-12-09 06:03:08.414088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce290 is same with the state(6) to be set 01:04:14.025 [2024-12-09 06:03:08.414097] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce290 is same with the state(6) to be set 01:04:14.025 [2024-12-09 06:03:08.414106] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce290 is same with the state(6) to be set 01:04:14.025 [2024-12-09 06:03:08.414114] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce290 is same with the state(6) to be set 01:04:14.025 [2024-12-09 06:03:08.414122] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ce290 is same with the state(6) to be set 01:04:14.025 06:03:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 01:04:17.346 06:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:04:17.346 [2024-12-09 06:03:11.734262] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:04:17.346 06:03:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 01:04:18.280 06:03:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 01:04:18.539 [2024-12-09 06:03:13.034630] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d8e50 is same with the state(6) to be set 01:04:18.539 [2024-12-09 06:03:13.034709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d8e50 is same with the state(6) to be set 01:04:18.539 [2024-12-09 06:03:13.034721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d8e50 is same with the state(6) to be set 01:04:18.539 [2024-12-09 06:03:13.034729] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d8e50 is same with the state(6) to be set 01:04:18.539 [2024-12-09 06:03:13.034738] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d8e50 is same with the state(6) to be set 01:04:18.539 [2024-12-09 06:03:13.034747] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d8e50 is same with the state(6) to be set 01:04:18.539 [2024-12-09 06:03:13.034755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d8e50 is same with the state(6) to be set 01:04:18.539 [2024-12-09 06:03:13.034763] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d8e50 is same with the state(6) to be set 01:04:18.539 [2024-12-09 06:03:13.034772] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x18d8e50 is same with the state(6) to be set 01:04:18.540 [2024-12-09 06:03:13.035184] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d8e50 is same with the state(6) to be set 01:04:18.540 [2024-12-09 06:03:13.035191] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d8e50 is same with the state(6) to be set 01:04:18.540 06:03:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 87758 01:04:25.122 { 01:04:25.122 "results": [ 01:04:25.122 { 01:04:25.122 "job": "NVMe0n1", 01:04:25.122 "core_mask": "0x1", 01:04:25.122 "workload": "verify", 01:04:25.122 "status": "finished", 01:04:25.122 "verify_range": { 01:04:25.122 "start": 0, 01:04:25.122 "length": 16384 01:04:25.122 }, 01:04:25.122 "queue_depth": 128, 01:04:25.122 "io_size": 4096, 01:04:25.122 "runtime": 15.014471, 01:04:25.122 "iops": 9025.226396587665, 01:04:25.122 "mibps": 35.254790611670565, 01:04:25.122 "io_failed": 3485, 01:04:25.122 "io_timeout": 0, 01:04:25.122 "avg_latency_us": 13795.018531028809, 01:04:25.122 "min_latency_us": 647.9127272727272, 01:04:25.122 "max_latency_us": 22639.70909090909 01:04:25.122 } 01:04:25.122 ], 01:04:25.122 "core_count": 1 01:04:25.122 } 01:04:25.122 06:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 87724 01:04:25.122 06:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 87724 ']' 01:04:25.122 06:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 87724 01:04:25.122 06:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 01:04:25.122 06:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:04:25.122 06:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87724 01:04:25.122 killing process with pid 87724 01:04:25.122 06:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:04:25.122 06:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:04:25.122 06:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87724' 01:04:25.122 06:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 87724 01:04:25.122 06:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 87724 01:04:25.122 06:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:04:25.122 [2024-12-09 06:03:02.351618] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:04:25.122 [2024-12-09 06:03:02.351767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87724 ] 01:04:25.122 [2024-12-09 06:03:02.498405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:25.122 [2024-12-09 06:03:02.538246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:04:25.122 Running I/O for 15 seconds... 
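The JSON block emitted after 'wait 87758' above is the bdevperf summary for the 15-second verify run; its mibps field is just iops multiplied by io_size and scaled to MiB, which can be double-checked with a one-liner (numbers taken from this run):

  awk 'BEGIN { printf "%.6f MiB/s\n", 9025.226396587665 * 4096 / (1024 * 1024) }'   # prints ~35.254791, matching "mibps"

The io_failed count of 3485 is consistent with the ABORTED - SQ DELETION completions replayed from try.txt below, which bdevperf sees each time a listener is removed and the active path has to fail over.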
01:04:25.122 8884.00 IOPS, 34.70 MiB/s [2024-12-09T06:03:19.709Z] [2024-12-09 06:03:04.725573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.123 [2024-12-09 06:03:04.725671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.725701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.123 [2024-12-09 06:03:04.725718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.725735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.123 [2024-12-09 06:03:04.725749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.725766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.123 [2024-12-09 06:03:04.725781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.725796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.123 [2024-12-09 06:03:04.725810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.725826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.123 [2024-12-09 06:03:04.725840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.725856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.123 [2024-12-09 06:03:04.725870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.725885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.123 [2024-12-09 06:03:04.725899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.725915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.123 [2024-12-09 06:03:04.725929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.725944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.123 [2024-12-09 06:03:04.725958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.725974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.123 [2024-12-09 06:03:04.725988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.726045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.123 [2024-12-09 06:03:04.726060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.726076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.123 [2024-12-09 06:03:04.726089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.726104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.123 [2024-12-09 06:03:04.726118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.726133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.123 [2024-12-09 06:03:04.726146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.726178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.123 [2024-12-09 06:03:04.726192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.726208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.123 [2024-12-09 06:03:04.726222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.726251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.123 [2024-12-09 06:03:04.726265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.726280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.123 [2024-12-09 06:03:04.726294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.726309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.123 [2024-12-09 06:03:04.726323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 
06:03:04.726338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.123 [2024-12-09 06:03:04.726352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.726367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.123 [2024-12-09 06:03:04.726381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.726396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.123 [2024-12-09 06:03:04.726410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.726425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.123 [2024-12-09 06:03:04.726448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.726464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.123 [2024-12-09 06:03:04.726478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.726493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.123 [2024-12-09 06:03:04.726507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.726524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.123 [2024-12-09 06:03:04.726537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.726552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.123 [2024-12-09 06:03:04.726566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.726581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:84200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.123 [2024-12-09 06:03:04.726610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.726654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.123 [2024-12-09 06:03:04.726681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.726698] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.123 [2024-12-09 06:03:04.726713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.726729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.123 [2024-12-09 06:03:04.726743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.726759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:84232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.123 [2024-12-09 06:03:04.726773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.726790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:84240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.123 [2024-12-09 06:03:04.726804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.726819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:84248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.123 [2024-12-09 06:03:04.726833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.726849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:84256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.123 [2024-12-09 06:03:04.726863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.726887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.123 [2024-12-09 06:03:04.726903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.726919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:84272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.123 [2024-12-09 06:03:04.726933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.123 [2024-12-09 06:03:04.726948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:84280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.123 [2024-12-09 06:03:04.726962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.726984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:84288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.124 [2024-12-09 06:03:04.727014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:103 nsid:1 lba:84296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.124 [2024-12-09 06:03:04.727041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:84304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.124 [2024-12-09 06:03:04.727069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:84312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.124 [2024-12-09 06:03:04.727096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:84320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.124 [2024-12-09 06:03:04.727124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:84328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.124 [2024-12-09 06:03:04.727152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:84336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.124 [2024-12-09 06:03:04.727180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:84344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.124 [2024-12-09 06:03:04.727208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:84352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.124 [2024-12-09 06:03:04.727236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:84360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.124 [2024-12-09 06:03:04.727270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.124 [2024-12-09 06:03:04.727299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84528 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.124 [2024-12-09 06:03:04.727326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.124 [2024-12-09 06:03:04.727354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.124 [2024-12-09 06:03:04.727382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.124 [2024-12-09 06:03:04.727410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.124 [2024-12-09 06:03:04.727438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.124 [2024-12-09 06:03:04.727468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.124 [2024-12-09 06:03:04.727496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.124 [2024-12-09 06:03:04.727524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.124 [2024-12-09 06:03:04.727552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.124 [2024-12-09 06:03:04.727580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.124 
[2024-12-09 06:03:04.727607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.124 [2024-12-09 06:03:04.727641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.124 [2024-12-09 06:03:04.727714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.124 [2024-12-09 06:03:04.727745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.124 [2024-12-09 06:03:04.727783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.124 [2024-12-09 06:03:04.727812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.124 [2024-12-09 06:03:04.727843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.124 [2024-12-09 06:03:04.727873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.124 [2024-12-09 06:03:04.727902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.124 [2024-12-09 06:03:04.727932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.124 [2024-12-09 06:03:04.727961] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.727978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.124 [2024-12-09 06:03:04.727993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.728008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.124 [2024-12-09 06:03:04.728023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.728054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.124 [2024-12-09 06:03:04.728068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.728089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.124 [2024-12-09 06:03:04.728104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.728119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.124 [2024-12-09 06:03:04.728133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.728148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.124 [2024-12-09 06:03:04.728161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.728176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:84744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.124 [2024-12-09 06:03:04.728190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.124 [2024-12-09 06:03:04.728205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.125 [2024-12-09 06:03:04.728221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.125 [2024-12-09 06:03:04.728237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.125 [2024-12-09 06:03:04.728250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.125 [2024-12-09 06:03:04.728265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.125 [2024-12-09 06:03:04.728279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.125 [2024-12-09 06:03:04.728311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.125 [2024-12-09 06:03:04.728325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84776 len:8 PRP1 0x0 PRP2 0x0 01:04:25.125 [2024-12-09 06:03:04.728339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.125 [2024-12-09 06:03:04.728356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.125 [2024-12-09 06:03:04.728367] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.125 [2024-12-09 06:03:04.728377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84784 len:8 PRP1 0x0 PRP2 0x0 01:04:25.125 [2024-12-09 06:03:04.728390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.125 [2024-12-09 06:03:04.728403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.125 [2024-12-09 06:03:04.728413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.125 [2024-12-09 06:03:04.728423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84792 len:8 PRP1 0x0 PRP2 0x0 01:04:25.125 [2024-12-09 06:03:04.728436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.125 [2024-12-09 06:03:04.728450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.125 [2024-12-09 06:03:04.728461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.125 [2024-12-09 06:03:04.728472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84800 len:8 PRP1 0x0 PRP2 0x0 01:04:25.125 [2024-12-09 06:03:04.728493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.125 [2024-12-09 06:03:04.728507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.125 [2024-12-09 06:03:04.728517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.125 [2024-12-09 06:03:04.728527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84808 len:8 PRP1 0x0 PRP2 0x0 01:04:25.125 [2024-12-09 06:03:04.728540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.125 [2024-12-09 06:03:04.728553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.125 [2024-12-09 06:03:04.728563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.125 [2024-12-09 06:03:04.728573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84816 len:8 PRP1 0x0 PRP2 0x0 01:04:25.125 [2024-12-09 06:03:04.728586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.125 [2024-12-09 06:03:04.728599] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 01:04:25.125 [2024-12-09 06:03:04.728609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.125 [2024-12-09 06:03:04.728619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84824 len:8 PRP1 0x0 PRP2 0x0 01:04:25.125 [2024-12-09 06:03:04.728631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.125 [2024-12-09 06:03:04.728646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.125 [2024-12-09 06:03:04.728656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.125 [2024-12-09 06:03:04.728696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84832 len:8 PRP1 0x0 PRP2 0x0 01:04:25.125 [2024-12-09 06:03:04.728711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.125 [2024-12-09 06:03:04.728726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.125 [2024-12-09 06:03:04.728736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.125 [2024-12-09 06:03:04.728746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84840 len:8 PRP1 0x0 PRP2 0x0 01:04:25.125 [2024-12-09 06:03:04.728759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.125 [2024-12-09 06:03:04.728773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.125 [2024-12-09 06:03:04.728783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.125 [2024-12-09 06:03:04.728793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84848 len:8 PRP1 0x0 PRP2 0x0 01:04:25.125 [2024-12-09 06:03:04.728807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.125 [2024-12-09 06:03:04.728821] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.125 [2024-12-09 06:03:04.728831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.125 [2024-12-09 06:03:04.728841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84856 len:8 PRP1 0x0 PRP2 0x0 01:04:25.125 [2024-12-09 06:03:04.728854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.125 [2024-12-09 06:03:04.728869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.125 [2024-12-09 06:03:04.728888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.125 [2024-12-09 06:03:04.728900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84864 len:8 PRP1 0x0 PRP2 0x0 01:04:25.125 [2024-12-09 06:03:04.728914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.125 [2024-12-09 06:03:04.728928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.125 [2024-12-09 06:03:04.728938] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.125 [2024-12-09 06:03:04.728948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84872 len:8 PRP1 0x0 PRP2 0x0 01:04:25.125 [2024-12-09 06:03:04.728961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.125 [2024-12-09 06:03:04.728975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.125 [2024-12-09 06:03:04.728985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.125 [2024-12-09 06:03:04.728995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84880 len:8 PRP1 0x0 PRP2 0x0 01:04:25.125 [2024-12-09 06:03:04.729009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.125 [2024-12-09 06:03:04.729022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.125 [2024-12-09 06:03:04.729032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.125 [2024-12-09 06:03:04.729043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84888 len:8 PRP1 0x0 PRP2 0x0 01:04:25.125 [2024-12-09 06:03:04.729056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.125 [2024-12-09 06:03:04.729071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.125 [2024-12-09 06:03:04.729081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.125 [2024-12-09 06:03:04.729092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84896 len:8 PRP1 0x0 PRP2 0x0 01:04:25.125 [2024-12-09 06:03:04.729105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.125 [2024-12-09 06:03:04.729119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.125 [2024-12-09 06:03:04.729129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.125 [2024-12-09 06:03:04.729139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84904 len:8 PRP1 0x0 PRP2 0x0 01:04:25.125 [2024-12-09 06:03:04.729153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.125 [2024-12-09 06:03:04.729166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.125 [2024-12-09 06:03:04.729177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.125 [2024-12-09 06:03:04.729187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84912 len:8 PRP1 0x0 PRP2 0x0 01:04:25.125 [2024-12-09 06:03:04.729200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.125 [2024-12-09 06:03:04.729214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.125 [2024-12-09 06:03:04.729224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 01:04:25.125 [2024-12-09 06:03:04.729234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84920 len:8 PRP1 0x0 PRP2 0x0 01:04:25.125 [2024-12-09 06:03:04.729248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.125 [2024-12-09 06:03:04.729268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.125 [2024-12-09 06:03:04.729280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.125 [2024-12-09 06:03:04.729291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84928 len:8 PRP1 0x0 PRP2 0x0 01:04:25.125 [2024-12-09 06:03:04.729304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.125 [2024-12-09 06:03:04.729318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.125 [2024-12-09 06:03:04.729328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.125 [2024-12-09 06:03:04.729338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84936 len:8 PRP1 0x0 PRP2 0x0 01:04:25.125 [2024-12-09 06:03:04.729352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.125 [2024-12-09 06:03:04.729365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.125 [2024-12-09 06:03:04.729375] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.126 [2024-12-09 06:03:04.729385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84944 len:8 PRP1 0x0 PRP2 0x0 01:04:25.126 [2024-12-09 06:03:04.729399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.126 [2024-12-09 06:03:04.729413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.126 [2024-12-09 06:03:04.729423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.126 [2024-12-09 06:03:04.729433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84952 len:8 PRP1 0x0 PRP2 0x0 01:04:25.126 [2024-12-09 06:03:04.729447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.126 [2024-12-09 06:03:04.729462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.126 [2024-12-09 06:03:04.729472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.126 [2024-12-09 06:03:04.729482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84960 len:8 PRP1 0x0 PRP2 0x0 01:04:25.126 [2024-12-09 06:03:04.729496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.126 [2024-12-09 06:03:04.729509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.126 [2024-12-09 06:03:04.729519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.126 
[2024-12-09 06:03:04.729530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84968 len:8 PRP1 0x0 PRP2 0x0 01:04:25.126 [2024-12-09 06:03:04.729543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.126 [2024-12-09 06:03:04.729557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.126 [2024-12-09 06:03:04.729567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.126 [2024-12-09 06:03:04.729577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84976 len:8 PRP1 0x0 PRP2 0x0 01:04:25.126 [2024-12-09 06:03:04.729590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.126 [2024-12-09 06:03:04.729604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.126 [2024-12-09 06:03:04.729614] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.126 [2024-12-09 06:03:04.729625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84984 len:8 PRP1 0x0 PRP2 0x0 01:04:25.126 [2024-12-09 06:03:04.729654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.126 [2024-12-09 06:03:04.729670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.126 [2024-12-09 06:03:04.729683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.126 [2024-12-09 06:03:04.729694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84992 len:8 PRP1 0x0 PRP2 0x0 01:04:25.126 [2024-12-09 06:03:04.729707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.126 [2024-12-09 06:03:04.729721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.126 [2024-12-09 06:03:04.729731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.126 [2024-12-09 06:03:04.729742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85000 len:8 PRP1 0x0 PRP2 0x0 01:04:25.126 [2024-12-09 06:03:04.729755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.126 [2024-12-09 06:03:04.729769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.126 [2024-12-09 06:03:04.729779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.126 [2024-12-09 06:03:04.729789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85008 len:8 PRP1 0x0 PRP2 0x0 01:04:25.126 [2024-12-09 06:03:04.729803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.126 [2024-12-09 06:03:04.729816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.126 [2024-12-09 06:03:04.729826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.126 [2024-12-09 06:03:04.729836] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85016 len:8 PRP1 0x0 PRP2 0x0 01:04:25.126 [2024-12-09 06:03:04.729850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.126 [2024-12-09 06:03:04.729865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.126 [2024-12-09 06:03:04.729875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.126 [2024-12-09 06:03:04.729885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85024 len:8 PRP1 0x0 PRP2 0x0 01:04:25.126 [2024-12-09 06:03:04.729899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.126 [2024-12-09 06:03:04.729912] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.126 [2024-12-09 06:03:04.729923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.126 [2024-12-09 06:03:04.729933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85032 len:8 PRP1 0x0 PRP2 0x0 01:04:25.126 [2024-12-09 06:03:04.729947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.126 [2024-12-09 06:03:04.729960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.126 [2024-12-09 06:03:04.729970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.126 [2024-12-09 06:03:04.729981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85040 len:8 PRP1 0x0 PRP2 0x0 01:04:25.126 [2024-12-09 06:03:04.729994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.126 [2024-12-09 06:03:04.730008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.126 [2024-12-09 06:03:04.730018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.126 [2024-12-09 06:03:04.730035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85048 len:8 PRP1 0x0 PRP2 0x0 01:04:25.126 [2024-12-09 06:03:04.730049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.126 [2024-12-09 06:03:04.730063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.126 [2024-12-09 06:03:04.730075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.126 [2024-12-09 06:03:04.730086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85056 len:8 PRP1 0x0 PRP2 0x0 01:04:25.126 [2024-12-09 06:03:04.730099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.126 [2024-12-09 06:03:04.730113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.126 [2024-12-09 06:03:04.730123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.126 [2024-12-09 06:03:04.730133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:85064 len:8 PRP1 0x0 PRP2 0x0 01:04:25.126 [2024-12-09 06:03:04.730147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.126 [2024-12-09 06:03:04.730160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.126 [2024-12-09 06:03:04.730171] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.126 [2024-12-09 06:03:04.730181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85072 len:8 PRP1 0x0 PRP2 0x0 01:04:25.126 [2024-12-09 06:03:04.730194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.126 [2024-12-09 06:03:04.730208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.126 [2024-12-09 06:03:04.730218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.126 [2024-12-09 06:03:04.730229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85080 len:8 PRP1 0x0 PRP2 0x0 01:04:25.126 [2024-12-09 06:03:04.730242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.126 [2024-12-09 06:03:04.730257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.126 [2024-12-09 06:03:04.730268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.126 [2024-12-09 06:03:04.730278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84368 len:8 PRP1 0x0 PRP2 0x0 01:04:25.126 [2024-12-09 06:03:04.730291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.126 [2024-12-09 06:03:04.730305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.126 [2024-12-09 06:03:04.730315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.126 [2024-12-09 06:03:04.730326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84376 len:8 PRP1 0x0 PRP2 0x0 01:04:25.126 [2024-12-09 06:03:04.730339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.126 [2024-12-09 06:03:04.730353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.126 [2024-12-09 06:03:04.730362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.126 [2024-12-09 06:03:04.730373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84384 len:8 PRP1 0x0 PRP2 0x0 01:04:25.126 [2024-12-09 06:03:04.730386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.126 [2024-12-09 06:03:04.730400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.126 [2024-12-09 06:03:04.741337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.126 [2024-12-09 06:03:04.741382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84392 len:8 PRP1 0x0 PRP2 0x0 01:04:25.126 
[2024-12-09 06:03:04.741413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.126 [2024-12-09 06:03:04.741448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.126 [2024-12-09 06:03:04.741472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.126 [2024-12-09 06:03:04.741488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84400 len:8 PRP1 0x0 PRP2 0x0 01:04:25.126 [2024-12-09 06:03:04.741507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.126 [2024-12-09 06:03:04.741526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.126 [2024-12-09 06:03:04.741540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.127 [2024-12-09 06:03:04.741555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84408 len:8 PRP1 0x0 PRP2 0x0 01:04:25.127 [2024-12-09 06:03:04.741574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.127 [2024-12-09 06:03:04.741593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.127 [2024-12-09 06:03:04.741606] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.127 [2024-12-09 06:03:04.741621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84416 len:8 PRP1 0x0 PRP2 0x0 01:04:25.127 [2024-12-09 06:03:04.741639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.127 [2024-12-09 06:03:04.741691] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.127 [2024-12-09 06:03:04.741706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.127 [2024-12-09 06:03:04.741720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84424 len:8 PRP1 0x0 PRP2 0x0 01:04:25.127 [2024-12-09 06:03:04.741739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.127 [2024-12-09 06:03:04.741807] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 01:04:25.127 [2024-12-09 06:03:04.741898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:04:25.127 [2024-12-09 06:03:04.741928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.127 [2024-12-09 06:03:04.741950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:04:25.127 [2024-12-09 06:03:04.741970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.127 [2024-12-09 06:03:04.741990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:04:25.127 
[2024-12-09 06:03:04.742010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.127 [2024-12-09 06:03:04.742030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:04:25.127 [2024-12-09 06:03:04.742049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.127 [2024-12-09 06:03:04.742068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 01:04:25.127 [2024-12-09 06:03:04.742191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa00 (9): Bad file descriptor 01:04:25.127 [2024-12-09 06:03:04.747986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 01:04:25.127 [2024-12-09 06:03:04.778101] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 01:04:25.127 9077.00 IOPS, 35.46 MiB/s [2024-12-09T06:03:19.713Z] 9242.67 IOPS, 36.10 MiB/s [2024-12-09T06:03:19.713Z] 9189.50 IOPS, 35.90 MiB/s [2024-12-09T06:03:19.713Z] [2024-12-09 06:03:08.415384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:102856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.127 [2024-12-09 06:03:08.415430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.127 [2024-12-09 06:03:08.415456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:102864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.127 [2024-12-09 06:03:08.415471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.127 [2024-12-09 06:03:08.415487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.127 [2024-12-09 06:03:08.415500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.127 [2024-12-09 06:03:08.415515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.127 [2024-12-09 06:03:08.415528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.127 [2024-12-09 06:03:08.415544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:102888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.127 [2024-12-09 06:03:08.415557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.127 [2024-12-09 06:03:08.415571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:102896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.127 [2024-12-09 06:03:08.415584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.127 [2024-12-09 06:03:08.415617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:102904 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 01:04:25.127 [2024-12-09 06:03:08.415631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.127 [2024-12-09 06:03:08.415646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:102912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.127 [2024-12-09 06:03:08.415660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.127 [2024-12-09 06:03:08.415676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:102920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.127 [2024-12-09 06:03:08.415704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.127 [2024-12-09 06:03:08.415724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:102928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.127 [2024-12-09 06:03:08.415738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.127 [2024-12-09 06:03:08.415754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:102936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.127 [2024-12-09 06:03:08.415768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.127 [2024-12-09 06:03:08.415810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:102944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.127 [2024-12-09 06:03:08.415826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.127 [2024-12-09 06:03:08.415847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:102952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.127 [2024-12-09 06:03:08.415861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.127 [2024-12-09 06:03:08.415877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:102960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.127 [2024-12-09 06:03:08.415891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.127 [2024-12-09 06:03:08.415907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:102968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.127 [2024-12-09 06:03:08.415921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.127 [2024-12-09 06:03:08.415936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:102976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.127 [2024-12-09 06:03:08.415950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.127 [2024-12-09 06:03:08.415966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:102984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.127 
[2024-12-09 06:03:08.415981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.127 [2024-12-09 06:03:08.415997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:102992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.127 [2024-12-09 06:03:08.416011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.127 [2024-12-09 06:03:08.416026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:103000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.127 [2024-12-09 06:03:08.416040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.127 [2024-12-09 06:03:08.416056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:103008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.127 [2024-12-09 06:03:08.416070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.127 [2024-12-09 06:03:08.416085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:103016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.127 [2024-12-09 06:03:08.416099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.127 [2024-12-09 06:03:08.416114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:103024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.127 [2024-12-09 06:03:08.416128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.416144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:103032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.416157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.416173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:103040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.416195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.416212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:103048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.416225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.416241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:103056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.416255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.416271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:103064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.416285] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.416300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:103072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.416314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.416330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:103080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.416344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.416359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:103088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.416373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.416389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:103096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.416403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.416418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:103104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.416432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.416448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:103112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.416462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.416478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:103120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.416502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.416518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:103128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.416531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.416547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:103136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.416561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.416583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:103144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.416598] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.416614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:103152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.416628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.416654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:103160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.416671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.416687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:103168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.416701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.416717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:103176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.416731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.416747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:103184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.416761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.416777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:103192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.416792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.416807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:103200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.416821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.416838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:103208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.416852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.416867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.416881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.416897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:103224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.416911] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.416927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:103232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.416941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.416956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.416978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.416995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.417025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.417040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:103256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.417054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.417069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:103264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.417083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.417098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:103272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.417112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.417127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:103280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.417140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.417156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:103288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.417169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.417185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:103296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.417198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.417213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.417227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.417242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:103312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.417256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.417272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:103320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.417285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.417301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:103328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.417314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.128 [2024-12-09 06:03:08.417329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:103336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.128 [2024-12-09 06:03:08.417342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.417364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:103344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.129 [2024-12-09 06:03:08.417379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.417395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:103352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.129 [2024-12-09 06:03:08.417408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.417424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:103360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.129 [2024-12-09 06:03:08.417437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.417452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:103368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.129 [2024-12-09 06:03:08.417482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.417498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:103376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.129 [2024-12-09 06:03:08.417511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.417527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.129 [2024-12-09 06:03:08.417541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.417557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:103392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.129 [2024-12-09 06:03:08.417571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.417586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:103400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.129 [2024-12-09 06:03:08.417600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.417616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:103408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.129 [2024-12-09 06:03:08.417630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.417645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:103416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.129 [2024-12-09 06:03:08.417659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.417688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:103424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.129 [2024-12-09 06:03:08.417705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.417721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:103432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.129 [2024-12-09 06:03:08.417735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.417751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.129 [2024-12-09 06:03:08.417773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.417790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:103448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.129 [2024-12-09 06:03:08.417804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.417820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:103456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.129 [2024-12-09 06:03:08.417834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.417850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.129 [2024-12-09 06:03:08.417863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
01:04:25.129 [2024-12-09 06:03:08.417879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:103472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.129 [2024-12-09 06:03:08.417893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.417910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:103480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.129 [2024-12-09 06:03:08.417924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.417940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:103488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.129 [2024-12-09 06:03:08.417953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.417969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:103552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.129 [2024-12-09 06:03:08.417983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.417999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:103560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.129 [2024-12-09 06:03:08.418013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.418028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.129 [2024-12-09 06:03:08.418042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.418058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:103576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.129 [2024-12-09 06:03:08.418072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.418088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:103584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.129 [2024-12-09 06:03:08.418101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.418117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:103592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.129 [2024-12-09 06:03:08.418131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.418156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:103496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.129 [2024-12-09 06:03:08.418171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 
06:03:08.418187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.129 [2024-12-09 06:03:08.418201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.418216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:103512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.129 [2024-12-09 06:03:08.418230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.418246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.129 [2024-12-09 06:03:08.418259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.418275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.129 [2024-12-09 06:03:08.418289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.418305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:103536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.129 [2024-12-09 06:03:08.418319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.418335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:103544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.129 [2024-12-09 06:03:08.418348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.418364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:103600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.129 [2024-12-09 06:03:08.418378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.418394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:103608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.129 [2024-12-09 06:03:08.418408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.418423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:103616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.129 [2024-12-09 06:03:08.418437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.418452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:103624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.129 [2024-12-09 06:03:08.418466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.418482] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:103632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.129 [2024-12-09 06:03:08.418496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.418511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:103640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.129 [2024-12-09 06:03:08.418531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.418547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:103648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.129 [2024-12-09 06:03:08.418561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.129 [2024-12-09 06:03:08.418577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:103656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.130 [2024-12-09 06:03:08.418592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.418607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:103664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.130 [2024-12-09 06:03:08.418631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.418657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:103672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.130 [2024-12-09 06:03:08.418673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.418713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:103680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.130 [2024-12-09 06:03:08.418728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.418744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:103688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.130 [2024-12-09 06:03:08.418759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.418774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:103696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.130 [2024-12-09 06:03:08.418788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.418803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:103704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.130 [2024-12-09 06:03:08.418817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.418833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:108 nsid:1 lba:103712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.130 [2024-12-09 06:03:08.418847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.418862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:103720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.130 [2024-12-09 06:03:08.418876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.418892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:103728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.130 [2024-12-09 06:03:08.418906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.418921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:103736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.130 [2024-12-09 06:03:08.418935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.418950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:103744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.130 [2024-12-09 06:03:08.418973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.418990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:103752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.130 [2024-12-09 06:03:08.419005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.419020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:103760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.130 [2024-12-09 06:03:08.419034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.419049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:103768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.130 [2024-12-09 06:03:08.419063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.419079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:103776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.130 [2024-12-09 06:03:08.419093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.419119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:103784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.130 [2024-12-09 06:03:08.419139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.419156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:103792 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.130 [2024-12-09 06:03:08.419170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.419186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:103800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.130 [2024-12-09 06:03:08.419199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.419215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:103808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.130 [2024-12-09 06:03:08.419229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.419245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:103816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.130 [2024-12-09 06:03:08.419259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.419274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:103824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.130 [2024-12-09 06:03:08.419288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.419304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:103832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.130 [2024-12-09 06:03:08.419318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.419334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:103840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.130 [2024-12-09 06:03:08.419348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.419370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:103848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.130 [2024-12-09 06:03:08.419384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.419404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:103856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.130 [2024-12-09 06:03:08.419418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.419434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:103864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.130 [2024-12-09 06:03:08.419447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.419480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.130 [2024-12-09 06:03:08.419493] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.130 [2024-12-09 06:03:08.419504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103872 len:8 PRP1 0x0 PRP2 0x0 01:04:25.130 [2024-12-09 06:03:08.419518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.419569] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 01:04:25.130 [2024-12-09 06:03:08.419628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:04:25.130 [2024-12-09 06:03:08.419665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.419682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:04:25.130 [2024-12-09 06:03:08.419696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.419710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:04:25.130 [2024-12-09 06:03:08.419724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.419741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:04:25.130 [2024-12-09 06:03:08.419755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:08.419769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 01:04:25.130 [2024-12-09 06:03:08.419820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa00 (9): Bad file descriptor 01:04:25.130 [2024-12-09 06:03:08.423834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 01:04:25.130 [2024-12-09 06:03:08.453903] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
01:04:25.130 9079.60 IOPS, 35.47 MiB/s [2024-12-09T06:03:19.716Z] 9064.17 IOPS, 35.41 MiB/s [2024-12-09T06:03:19.716Z] 9041.29 IOPS, 35.32 MiB/s [2024-12-09T06:03:19.716Z] 9036.75 IOPS, 35.30 MiB/s [2024-12-09T06:03:19.716Z] 9018.33 IOPS, 35.23 MiB/s [2024-12-09T06:03:19.716Z] [2024-12-09 06:03:13.035362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:48432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.130 [2024-12-09 06:03:13.035404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:13.035447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:48440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.130 [2024-12-09 06:03:13.035464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.130 [2024-12-09 06:03:13.035478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:48448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.130 [2024-12-09 06:03:13.035490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.035504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:48456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.035515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.035529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:48464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.035541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.035554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:48472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.035566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.035579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:48480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.035591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.035604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:48488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.035616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.035630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:48496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.035641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.035655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 
lba:48504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.035666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.035679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:48512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.035709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.035736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:48520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.035752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.035766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:48528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.035778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.035792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:48536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.035804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.035839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:48544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.035854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.035868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:48552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.035880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.035895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:48560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.035908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.035923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:48568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.035936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.035950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:48576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.035962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.035976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:48584 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.036004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.036018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:48592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.036030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.036043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:48600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.036055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.036069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:48608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.036081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.036094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:48616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.036106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.036120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:48624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.036132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.036145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:48632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.036157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.036170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:48640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.036189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.036204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:48648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.036216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.036229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:48656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.036241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.036255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:48664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 
06:03:13.036267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.036281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:48672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.036293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.036306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:48680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.036318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.036332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:48688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.036345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.036358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:48696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.036370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.036384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:48704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.036396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.036409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:48712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.131 [2024-12-09 06:03:13.036421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.131 [2024-12-09 06:03:13.036452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:48720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.132 [2024-12-09 06:03:13.036465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.036479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:48728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.132 [2024-12-09 06:03:13.036491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.036505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:49016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.132 [2024-12-09 06:03:13.036535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.036556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:49024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.132 [2024-12-09 06:03:13.036570] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.036600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:49032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.132 [2024-12-09 06:03:13.036614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.036629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:49040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.132 [2024-12-09 06:03:13.036642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.036656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:49048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.132 [2024-12-09 06:03:13.036669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.036684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.132 [2024-12-09 06:03:13.036697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.036722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:49064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.132 [2024-12-09 06:03:13.036737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.036753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:48736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.132 [2024-12-09 06:03:13.036766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.036781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:48744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.132 [2024-12-09 06:03:13.036794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.036810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:48752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.132 [2024-12-09 06:03:13.036838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.036852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:48760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.132 [2024-12-09 06:03:13.036866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.036880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:48768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.132 [2024-12-09 06:03:13.036893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.036908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:48776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.132 [2024-12-09 06:03:13.036925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.036940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:48784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.132 [2024-12-09 06:03:13.036960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.036976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:48792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.132 [2024-12-09 06:03:13.036989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.037003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:49072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.132 [2024-12-09 06:03:13.037016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.037030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:49080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.132 [2024-12-09 06:03:13.037043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.037057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:49088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.132 [2024-12-09 06:03:13.037070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.037084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:49096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.132 [2024-12-09 06:03:13.037096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.037111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:49104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.132 [2024-12-09 06:03:13.037123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.037137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:49112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.132 [2024-12-09 06:03:13.037150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.037164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:49120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.132 [2024-12-09 06:03:13.037177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.037191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:49128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.132 [2024-12-09 06:03:13.037203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.037217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:49136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.132 [2024-12-09 06:03:13.037231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.037245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:49144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.132 [2024-12-09 06:03:13.037258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.037272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:49152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.132 [2024-12-09 06:03:13.037284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.037299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.132 [2024-12-09 06:03:13.037317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.037332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.132 [2024-12-09 06:03:13.037345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.037359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:49176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.132 [2024-12-09 06:03:13.037374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.037388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:49184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.132 [2024-12-09 06:03:13.037401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.037415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:49192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.132 [2024-12-09 06:03:13.037428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.037442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.132 [2024-12-09 06:03:13.037455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 
06:03:13.037469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:49208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.132 [2024-12-09 06:03:13.037481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.037495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.132 [2024-12-09 06:03:13.037508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.037523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:49224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.132 [2024-12-09 06:03:13.037535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.037550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.132 [2024-12-09 06:03:13.037562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.037576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:49240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.132 [2024-12-09 06:03:13.037589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.132 [2024-12-09 06:03:13.037603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:49248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.132 [2024-12-09 06:03:13.037616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.037629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:49256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.133 [2024-12-09 06:03:13.037642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.037671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.133 [2024-12-09 06:03:13.037687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.037701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.133 [2024-12-09 06:03:13.037714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.037728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:49280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.133 [2024-12-09 06:03:13.037744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.037759] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:49288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.133 [2024-12-09 06:03:13.037772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.037786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.133 [2024-12-09 06:03:13.037799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.037813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:49304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.133 [2024-12-09 06:03:13.037828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.037842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:49312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.133 [2024-12-09 06:03:13.037855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.037869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:48800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.133 [2024-12-09 06:03:13.037882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.037897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:48808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.133 [2024-12-09 06:03:13.037910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.037924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:48816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.133 [2024-12-09 06:03:13.037936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.037950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:48824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.133 [2024-12-09 06:03:13.037963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.037978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:48832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.133 [2024-12-09 06:03:13.037991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.038005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:48840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.133 [2024-12-09 06:03:13.038024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.038039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:67 nsid:1 lba:48848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.133 [2024-12-09 06:03:13.038052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.038067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:48856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.133 [2024-12-09 06:03:13.038079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.038094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:48864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.133 [2024-12-09 06:03:13.038107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.038122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.133 [2024-12-09 06:03:13.038134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.038148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:48880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.133 [2024-12-09 06:03:13.038161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.038175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:48888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.133 [2024-12-09 06:03:13.038190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.038205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:48896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.133 [2024-12-09 06:03:13.038218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.038232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:48904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.133 [2024-12-09 06:03:13.038245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.038259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:48912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.133 [2024-12-09 06:03:13.038273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.038288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:48920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.133 [2024-12-09 06:03:13.038301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.038315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:48928 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.133 [2024-12-09 06:03:13.038328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.038342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:48936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.133 [2024-12-09 06:03:13.038355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.038375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:48944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.133 [2024-12-09 06:03:13.038388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.038403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:48952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.133 [2024-12-09 06:03:13.038416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.038430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:48960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.133 [2024-12-09 06:03:13.038443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.038457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:48968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.133 [2024-12-09 06:03:13.038470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.038484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:48976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.133 [2024-12-09 06:03:13.038497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.038510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:48984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.133 [2024-12-09 06:03:13.038524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.038538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:48992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.133 [2024-12-09 06:03:13.038551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.038565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:49000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:04:25.133 [2024-12-09 06:03:13.038577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.038591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:49008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:04:25.133 [2024-12-09 06:03:13.038604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.038627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:49320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.133 [2024-12-09 06:03:13.038670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.133 [2024-12-09 06:03:13.038689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.134 [2024-12-09 06:03:13.038703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.134 [2024-12-09 06:03:13.038719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:49336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.134 [2024-12-09 06:03:13.038733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.134 [2024-12-09 06:03:13.038748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:49344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.134 [2024-12-09 06:03:13.038771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.134 [2024-12-09 06:03:13.038789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:49352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.134 [2024-12-09 06:03:13.038803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.134 [2024-12-09 06:03:13.038818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:49360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.134 [2024-12-09 06:03:13.038832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.134 [2024-12-09 06:03:13.038848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:49368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.134 [2024-12-09 06:03:13.038862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.134 [2024-12-09 06:03:13.038877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:49376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.134 [2024-12-09 06:03:13.038891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.134 [2024-12-09 06:03:13.038906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:49384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.134 [2024-12-09 06:03:13.038920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.134 [2024-12-09 06:03:13.038936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:49392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.134 [2024-12-09 06:03:13.038950] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.134 [2024-12-09 06:03:13.038965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.134 [2024-12-09 06:03:13.038979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.134 [2024-12-09 06:03:13.038994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:49408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.134 [2024-12-09 06:03:13.039008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.134 [2024-12-09 06:03:13.039024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:49416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.134 [2024-12-09 06:03:13.039037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.134 [2024-12-09 06:03:13.039068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.134 [2024-12-09 06:03:13.039091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.134 [2024-12-09 06:03:13.039105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:49432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.134 [2024-12-09 06:03:13.039119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.134 [2024-12-09 06:03:13.039133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:49440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:04:25.134 [2024-12-09 06:03:13.039147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.134 [2024-12-09 06:03:13.039179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:04:25.134 [2024-12-09 06:03:13.039202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:04:25.134 [2024-12-09 06:03:13.039215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49448 len:8 PRP1 0x0 PRP2 0x0 01:04:25.134 [2024-12-09 06:03:13.039228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.134 [2024-12-09 06:03:13.039278] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 01:04:25.134 [2024-12-09 06:03:13.039334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:04:25.134 [2024-12-09 06:03:13.039355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:25.134 [2024-12-09 06:03:13.039372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:04:25.134 [2024-12-09 
06:03:13.039386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:04:25.134 [2024-12-09 06:03:13.039400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
01:04:25.134 [2024-12-09 06:03:13.039413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:04:25.134 [2024-12-09 06:03:13.039427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
01:04:25.134 [2024-12-09 06:03:13.039441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:04:25.134 [2024-12-09 06:03:13.039454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
01:04:25.134 [2024-12-09 06:03:13.043200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
01:04:25.134 [2024-12-09 06:03:13.043237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x139fa00 (9): Bad file descriptor
01:04:25.134 [2024-12-09 06:03:13.068642] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
01:04:25.134 9040.80 IOPS, 35.32 MiB/s [2024-12-09T06:03:19.720Z] 9041.64 IOPS, 35.32 MiB/s [2024-12-09T06:03:19.720Z] 9047.58 IOPS, 35.34 MiB/s [2024-12-09T06:03:19.720Z] 9088.46 IOPS, 35.50 MiB/s [2024-12-09T06:03:19.720Z] 9078.50 IOPS, 35.46 MiB/s [2024-12-09T06:03:19.720Z] 9026.00 IOPS, 35.26 MiB/s
01:04:25.134 Latency(us)
01:04:25.134 [2024-12-09T06:03:19.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:04:25.134 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
01:04:25.134 Verification LBA range: start 0x0 length 0x4000
01:04:25.134 NVMe0n1 : 15.01 9025.23 35.25 232.11 0.00 13795.02 647.91 22639.71
01:04:25.134 [2024-12-09T06:03:19.720Z] ===================================================================================================================
01:04:25.134 [2024-12-09T06:03:19.720Z] Total : 9025.23 35.25 232.11 0.00 13795.02 647.91 22639.71
01:04:25.134 Received shutdown signal, test time was about 15.000000 seconds
01:04:25.134
01:04:25.134 Latency(us)
01:04:25.134 [2024-12-09T06:03:19.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:04:25.134 [2024-12-09T06:03:19.720Z] ===================================================================================================================
01:04:25.134 [2024-12-09T06:03:19.720Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:04:25.134 06:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
01:04:25.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
01:04:25.134 06:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
01:04:25.134 06:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
01:04:25.134 06:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=87963
01:04:25.134 06:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 87963 /var/tmp/bdevperf.sock
01:04:25.134 06:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
01:04:25.134 06:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 87963 ']'
01:04:25.134 06:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
01:04:25.134 06:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
01:04:25.134 06:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
01:04:25.134 06:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
01:04:25.134 06:03:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
01:04:25.134 06:03:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:04:25.134 06:03:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
01:04:25.134 06:03:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
01:04:25.134 [2024-12-09 06:03:19.353953] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
01:04:25.134 06:03:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
01:04:25.134 [2024-12-09 06:03:19.662294] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 ***
01:04:25.134 06:03:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
01:04:25.701 NVMe0n1
01:04:25.701 06:03:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
01:04:25.966
01:04:25.966 06:03:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
01:04:26.225
01:04:26.225 06:03:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
01:04:26.225 06:03:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
01:04:26.790 06:03:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock
bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:04:26.790 06:03:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 01:04:30.067 06:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:04:30.067 06:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 01:04:30.325 06:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=88092 01:04:30.325 06:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:04:30.325 06:03:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 88092 01:04:31.260 { 01:04:31.260 "results": [ 01:04:31.260 { 01:04:31.260 "job": "NVMe0n1", 01:04:31.260 "core_mask": "0x1", 01:04:31.260 "workload": "verify", 01:04:31.260 "status": "finished", 01:04:31.260 "verify_range": { 01:04:31.260 "start": 0, 01:04:31.260 "length": 16384 01:04:31.260 }, 01:04:31.260 "queue_depth": 128, 01:04:31.260 "io_size": 4096, 01:04:31.260 "runtime": 1.007101, 01:04:31.260 "iops": 8705.184484972213, 01:04:31.260 "mibps": 34.00462689442271, 01:04:31.260 "io_failed": 0, 01:04:31.260 "io_timeout": 0, 01:04:31.260 "avg_latency_us": 14611.521907981376, 01:04:31.260 "min_latency_us": 1951.1854545454546, 01:04:31.260 "max_latency_us": 14596.654545454545 01:04:31.260 } 01:04:31.260 ], 01:04:31.260 "core_count": 1 01:04:31.260 } 01:04:31.260 06:03:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:04:31.260 [2024-12-09 06:03:18.834602] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:04:31.260 [2024-12-09 06:03:18.834754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87963 ] 01:04:31.260 [2024-12-09 06:03:18.986818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:31.260 [2024-12-09 06:03:19.039927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:04:31.260 [2024-12-09 06:03:21.312592] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 01:04:31.260 [2024-12-09 06:03:21.313298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:04:31.260 [2024-12-09 06:03:21.313436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:31.260 [2024-12-09 06:03:21.313544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:04:31.260 [2024-12-09 06:03:21.313672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:31.260 [2024-12-09 06:03:21.313791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:04:31.260 [2024-12-09 06:03:21.313887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:31.260 [2024-12-09 06:03:21.313970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:04:31.260 [2024-12-09 06:03:21.314060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:31.260 [2024-12-09 06:03:21.314146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 01:04:31.260 [2024-12-09 06:03:21.314317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 01:04:31.260 [2024-12-09 06:03:21.314447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c5a00 (9): Bad file descriptor 01:04:31.260 [2024-12-09 06:03:21.323092] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 01:04:31.260 Running I/O for 1 seconds... 
01:04:31.260 8639.00 IOPS, 33.75 MiB/s 01:04:31.260 Latency(us) 01:04:31.260 [2024-12-09T06:03:25.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:04:31.260 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 01:04:31.260 Verification LBA range: start 0x0 length 0x4000 01:04:31.260 NVMe0n1 : 1.01 8705.18 34.00 0.00 0.00 14611.52 1951.19 14596.65 01:04:31.260 [2024-12-09T06:03:25.846Z] =================================================================================================================== 01:04:31.260 [2024-12-09T06:03:25.846Z] Total : 8705.18 34.00 0.00 0.00 14611.52 1951.19 14596.65 01:04:31.260 06:03:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:04:31.260 06:03:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 01:04:31.842 06:03:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:04:31.842 06:03:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:04:32.099 06:03:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 01:04:32.357 06:03:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:04:32.615 06:03:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 01:04:35.900 06:03:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 01:04:35.900 06:03:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 01:04:35.900 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 87963 01:04:35.900 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 87963 ']' 01:04:35.900 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 87963 01:04:35.900 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 01:04:35.900 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:04:35.900 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87963 01:04:35.900 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:04:35.900 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:04:35.900 killing process with pid 87963 01:04:35.900 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87963' 01:04:35.900 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 87963 01:04:35.900 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 87963 01:04:35.900 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 01:04:36.159 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:04:36.418 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 01:04:36.418 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:04:36.418 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 01:04:36.418 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 01:04:36.418 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 01:04:36.418 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:04:36.418 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 01:04:36.418 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 01:04:36.418 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:04:36.418 rmmod nvme_tcp 01:04:36.418 rmmod nvme_fabrics 01:04:36.418 rmmod nvme_keyring 01:04:36.418 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:04:36.418 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 01:04:36.418 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 01:04:36.418 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 87626 ']' 01:04:36.418 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 87626 01:04:36.418 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 87626 ']' 01:04:36.418 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 87626 01:04:36.418 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 01:04:36.418 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:04:36.418 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87626 01:04:36.418 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:04:36.418 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:04:36.419 killing process with pid 87626 01:04:36.419 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87626' 01:04:36.419 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 87626 01:04:36.419 06:03:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 87626 01:04:36.677 06:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:04:36.677 06:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:04:36.677 06:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:04:36.677 06:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 01:04:36.677 06:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 01:04:36.677 06:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:04:36.677 06:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 01:04:36.677 06:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:04:36.677 06:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:04:36.677 06:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:04:36.677 06:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:04:36.677 06:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:04:36.677 06:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:04:36.678 06:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:04:36.678 06:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:04:36.678 06:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:04:36.678 06:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:04:36.678 06:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:04:36.678 06:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:04:36.678 06:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:04:36.678 06:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:04:36.678 06:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:04:36.937 06:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 01:04:36.937 06:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:04:36.937 06:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:04:36.937 06:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:04:36.937 06:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 01:04:36.937 01:04:36.937 real 0m32.160s 01:04:36.937 user 2m5.310s 01:04:36.937 sys 0m4.475s 01:04:36.937 06:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 01:04:36.937 06:03:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 01:04:36.937 ************************************ 01:04:36.937 END TEST nvmf_failover 01:04:36.937 ************************************ 01:04:36.937 06:03:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 01:04:36.937 06:03:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:04:36.937 06:03:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:04:36.937 06:03:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:04:36.937 ************************************ 01:04:36.937 START TEST nvmf_host_discovery 01:04:36.937 ************************************ 01:04:36.937 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 01:04:36.937 * Looking for test storage... 
01:04:36.937 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:04:36.937 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:04:36.937 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 01:04:36.937 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:04:37.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:37.197 --rc genhtml_branch_coverage=1 01:04:37.197 --rc genhtml_function_coverage=1 01:04:37.197 --rc genhtml_legend=1 01:04:37.197 --rc geninfo_all_blocks=1 01:04:37.197 --rc geninfo_unexecuted_blocks=1 01:04:37.197 01:04:37.197 ' 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:04:37.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:37.197 --rc genhtml_branch_coverage=1 01:04:37.197 --rc genhtml_function_coverage=1 01:04:37.197 --rc genhtml_legend=1 01:04:37.197 --rc geninfo_all_blocks=1 01:04:37.197 --rc geninfo_unexecuted_blocks=1 01:04:37.197 01:04:37.197 ' 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:04:37.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:37.197 --rc genhtml_branch_coverage=1 01:04:37.197 --rc genhtml_function_coverage=1 01:04:37.197 --rc genhtml_legend=1 01:04:37.197 --rc geninfo_all_blocks=1 01:04:37.197 --rc geninfo_unexecuted_blocks=1 01:04:37.197 01:04:37.197 ' 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:04:37.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:37.197 --rc genhtml_branch_coverage=1 01:04:37.197 --rc genhtml_function_coverage=1 01:04:37.197 --rc genhtml_legend=1 01:04:37.197 --rc geninfo_all_blocks=1 01:04:37.197 --rc geninfo_unexecuted_blocks=1 01:04:37.197 01:04:37.197 ' 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:37.197 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:04:37.198 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:04:37.198 Cannot find device "nvmf_init_br" 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:04:37.198 Cannot find device "nvmf_init_br2" 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:04:37.198 Cannot find device "nvmf_tgt_br" 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:04:37.198 Cannot find device "nvmf_tgt_br2" 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:04:37.198 Cannot find device "nvmf_init_br" 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:04:37.198 Cannot find device "nvmf_init_br2" 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:04:37.198 Cannot find device "nvmf_tgt_br" 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:04:37.198 Cannot find device "nvmf_tgt_br2" 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:04:37.198 Cannot find device "nvmf_br" 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:04:37.198 Cannot find device "nvmf_init_if" 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:04:37.198 Cannot find device "nvmf_init_if2" 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:04:37.198 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:04:37.198 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:04:37.198 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:04:37.457 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:04:37.458 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:04:37.458 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 01:04:37.458 01:04:37.458 --- 10.0.0.3 ping statistics --- 01:04:37.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:37.458 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:04:37.458 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:04:37.458 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.074 ms 01:04:37.458 01:04:37.458 --- 10.0.0.4 ping statistics --- 01:04:37.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:37.458 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:04:37.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:04:37.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 01:04:37.458 01:04:37.458 --- 10.0.0.1 ping statistics --- 01:04:37.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:37.458 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:04:37.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:04:37.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 01:04:37.458 01:04:37.458 --- 10.0.0.2 ping statistics --- 01:04:37.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:37.458 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=88457 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 88457 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 88457 ']' 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 01:04:37.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 01:04:37.458 06:03:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:37.458 [2024-12-09 06:03:32.023909] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:04:37.458 [2024-12-09 06:03:32.024018] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:04:37.716 [2024-12-09 06:03:32.175685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:37.716 [2024-12-09 06:03:32.214067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:04:37.716 [2024-12-09 06:03:32.214138] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:04:37.716 [2024-12-09 06:03:32.214162] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:04:37.716 [2024-12-09 06:03:32.214172] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:04:37.716 [2024-12-09 06:03:32.214180] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:04:37.716 [2024-12-09 06:03:32.214571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:37.975 [2024-12-09 06:03:32.348845] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:37.975 [2024-12-09 06:03:32.357012] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:37.975 null0 01:04:37.975 06:03:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:37.975 null1 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=88494 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 88494 /tmp/host.sock 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 88494 ']' 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 01:04:37.975 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 01:04:37.975 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:37.975 [2024-12-09 06:03:32.448136] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:04:37.975 [2024-12-09 06:03:32.448238] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88494 ] 01:04:38.234 [2024-12-09 06:03:32.600335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:04:38.234 [2024-12-09 06:03:32.641003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:04:38.234 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:04:38.234 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 01:04:38.234 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:04:38.234 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 01:04:38.234 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:38.234 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:38.234 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:38.234 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 01:04:38.234 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:38.234 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:38.234 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:38.234 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 01:04:38.234 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 01:04:38.234 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:04:38.234 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:04:38.234 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:04:38.234 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:38.234 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:38.234 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:04:38.234 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:38.234 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 01:04:38.234 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 01:04:38.234 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:04:38.234 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:38.234 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:38.234 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 01:04:38.234 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:04:38.234 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:04:38.493 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:38.493 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 01:04:38.493 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 01:04:38.493 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:38.493 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:38.493 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:38.493 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 01:04:38.493 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:04:38.493 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:38.493 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:38.493 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:04:38.493 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:04:38.493 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:04:38.493 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:38.493 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 01:04:38.493 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 01:04:38.493 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:04:38.493 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:04:38.493 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:38.493 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:04:38.493 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:04:38.493 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:38.494 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:38.494 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 01:04:38.494 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 01:04:38.494 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:38.494 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:38.494 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:38.494 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 01:04:38.494 06:03:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:04:38.494 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:04:38.494 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:38.494 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:04:38.494 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:38.494 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:04:38.494 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:38.494 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 01:04:38.494 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 01:04:38.494 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:04:38.494 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:04:38.494 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:04:38.494 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:38.494 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:38.494 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:04:38.494 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:38.752 [2024-12-09 06:03:33.109207] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 01:04:38.752 06:03:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 01:04:38.752 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:38.753 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:38.753 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:38.753 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:04:38.753 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:04:38.753 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:04:38.753 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:04:38.753 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 01:04:38.753 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 01:04:38.753 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:04:38.753 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:38.753 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:04:38.753 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:38.753 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:04:38.753 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:04:38.753 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:38.753 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 01:04:38.753 06:03:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 01:04:39.319 [2024-12-09 06:03:33.762356] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 01:04:39.319 [2024-12-09 06:03:33.762409] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 01:04:39.319 
[2024-12-09 06:03:33.762428] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:04:39.319 [2024-12-09 06:03:33.848532] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 01:04:39.319 [2024-12-09 06:03:33.903083] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 01:04:39.319 [2024-12-09 06:03:33.904077] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xec3580:1 started. 01:04:39.577 [2024-12-09 06:03:33.905994] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 01:04:39.577 [2024-12-09 06:03:33.906052] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 01:04:39.577 [2024-12-09 06:03:33.910598] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xec3580 was disconnected and freed. delete nvme_qpair. 01:04:39.835 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:04:39.835 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 01:04:39.835 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 01:04:39.835 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:04:39.835 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:39.835 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:04:39.836 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:39.836 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:04:39.836 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:04:39.836 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:39.836 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:04:39.836 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:04:39.836 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 01:04:39.836 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 01:04:39.836 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:04:39.836 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:04:39.836 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 01:04:39.836 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 01:04:39.836 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:04:39.836 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
01:04:39.836 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:04:39.836 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:04:39.836 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:39.836 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:40.094 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:40.094 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 01:04:40.094 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:04:40.094 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 01:04:40.094 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 01:04:40.094 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:04:40.094 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:04:40.094 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:04:40.095 [2024-12-09 06:03:34.584837] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xec3920:1 started. 01:04:40.095 [2024-12-09 06:03:34.590887] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xec3920 was disconnected and freed. delete nvme_qpair. 
01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:40.095 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:40.353 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 01:04:40.353 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 01:04:40.353 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 01:04:40.353 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:04:40.353 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 01:04:40.353 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:40.353 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:40.353 [2024-12-09 06:03:34.694277] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:04:40.353 [2024-12-09 06:03:34.695454] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 01:04:40.353 [2024-12-09 06:03:34.695667] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:04:40.353 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:40.353 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 01:04:40.353 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:04:40.353 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:04:40.353 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:04:40.353 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 01:04:40.353 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 01:04:40.353 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:04:40.353 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:40.353 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:04:40.353 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:40.353 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:04:40.353 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:04:40.353 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:40.353 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:04:40.353 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:04:40.353 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:04:40.353 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:04:40.353 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:04:40.353 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:04:40.353 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 01:04:40.353 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 01:04:40.353 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:04:40.353 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:40.354 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:04:40.354 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:40.354 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:04:40.354 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:04:40.354 [2024-12-09 06:03:34.781122] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 01:04:40.354 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:40.354 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:04:40.354 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:04:40.354 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 01:04:40.354 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 01:04:40.354 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:04:40.354 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:04:40.354 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 01:04:40.354 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 01:04:40.354 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 01:04:40.354 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:40.354 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:40.354 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 01:04:40.354 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 01:04:40.354 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:04:40.354 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:40.354 [2024-12-09 06:03:34.845623] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 01:04:40.354 [2024-12-09 06:03:34.845711] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 01:04:40.354 [2024-12-09 06:03:34.845726] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 01:04:40.354 [2024-12-09 06:03:34.845733] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 01:04:40.354 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 01:04:40.354 06:03:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 01:04:41.730 06:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:04:41.730 06:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 01:04:41.730 06:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 01:04:41.730 06:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 01:04:41.730 06:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:41.730 06:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r 
'.[].ctrlrs[].trid.trsvcid' 01:04:41.730 06:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:41.730 06:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 01:04:41.730 06:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 01:04:41.730 06:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:41.730 06:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 01:04:41.730 06:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:04:41.730 06:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 01:04:41.730 06:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 01:04:41.730 06:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:04:41.730 06:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:04:41.730 06:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:04:41.730 06:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:04:41.730 06:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:04:41.730 06:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 01:04:41.730 06:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 01:04:41.730 06:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:41.730 06:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:41.730 06:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 01:04:41.730 06:03:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:41.730 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 01:04:41.730 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 01:04:41.730 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 01:04:41.730 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:04:41.730 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:04:41.730 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:41.730 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:41.730 [2024-12-09 06:03:36.012240] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 01:04:41.730 [2024-12-09 06:03:36.012295] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:04:41.730 [2024-12-09 06:03:36.013374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:04:41.730 [2024-12-09 06:03:36.013413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:41.730 [2024-12-09 06:03:36.013441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:04:41.730 [2024-12-09 06:03:36.013450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:41.730 [2024-12-09 06:03:36.013459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:04:41.730 [2024-12-09 06:03:36.013468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:41.730 [2024-12-09 06:03:36.013479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:04:41.730 [2024-12-09 06:03:36.013487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:04:41.730 [2024-12-09 06:03:36.013496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3b850 is same with the state(6) to be set 01:04:41.730 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:41.730 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:04:41.730 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 01:04:41.730 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:04:41.730 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:04:41.730 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
'[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 01:04:41.730 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 01:04:41.730 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:04:41.730 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:04:41.730 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:41.730 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:04:41.730 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:41.730 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:04:41.730 [2024-12-09 06:03:36.023306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3b850 (9): Bad file descriptor 01:04:41.730 [2024-12-09 06:03:36.033346] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:04:41.730 [2024-12-09 06:03:36.033375] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:04:41.730 [2024-12-09 06:03:36.033383] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:04:41.730 [2024-12-09 06:03:36.033390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:04:41.730 [2024-12-09 06:03:36.033424] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:04:41.730 [2024-12-09 06:03:36.033524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:41.730 [2024-12-09 06:03:36.033558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3b850 with addr=10.0.0.3, port=4420 01:04:41.730 [2024-12-09 06:03:36.033571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3b850 is same with the state(6) to be set 01:04:41.730 [2024-12-09 06:03:36.033589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3b850 (9): Bad file descriptor 01:04:41.730 [2024-12-09 06:03:36.033611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:04:41.730 [2024-12-09 06:03:36.033629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:04:41.730 [2024-12-09 06:03:36.033661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:04:41.730 [2024-12-09 06:03:36.033675] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:04:41.730 [2024-12-09 06:03:36.033682] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:04:41.730 [2024-12-09 06:03:36.033687] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
01:04:41.730 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:41.730 [2024-12-09 06:03:36.043436] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:04:41.730 [2024-12-09 06:03:36.043466] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:04:41.730 [2024-12-09 06:03:36.043473] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:04:41.730 [2024-12-09 06:03:36.043479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:04:41.730 [2024-12-09 06:03:36.043508] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:04:41.730 [2024-12-09 06:03:36.043580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:41.730 [2024-12-09 06:03:36.043602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3b850 with addr=10.0.0.3, port=4420 01:04:41.731 [2024-12-09 06:03:36.043614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3b850 is same with the state(6) to be set 01:04:41.731 [2024-12-09 06:03:36.043631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3b850 (9): Bad file descriptor 01:04:41.731 [2024-12-09 06:03:36.043663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:04:41.731 [2024-12-09 06:03:36.043682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:04:41.731 [2024-12-09 06:03:36.043698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:04:41.731 [2024-12-09 06:03:36.043713] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:04:41.731 [2024-12-09 06:03:36.043723] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:04:41.731 [2024-12-09 06:03:36.043731] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:04:41.731 [2024-12-09 06:03:36.053519] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:04:41.731 [2024-12-09 06:03:36.053548] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:04:41.731 [2024-12-09 06:03:36.053555] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:04:41.731 [2024-12-09 06:03:36.053561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:04:41.731 [2024-12-09 06:03:36.053589] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
01:04:41.731 [2024-12-09 06:03:36.053668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:41.731 [2024-12-09 06:03:36.053691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3b850 with addr=10.0.0.3, port=4420 01:04:41.731 [2024-12-09 06:03:36.053703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3b850 is same with the state(6) to be set 01:04:41.731 [2024-12-09 06:03:36.053720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3b850 (9): Bad file descriptor 01:04:41.731 [2024-12-09 06:03:36.053735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:04:41.731 [2024-12-09 06:03:36.053744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:04:41.731 [2024-12-09 06:03:36.053756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:04:41.731 [2024-12-09 06:03:36.053770] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:04:41.731 [2024-12-09 06:03:36.053780] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:04:41.731 [2024-12-09 06:03:36.053788] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:04:41.731 [2024-12-09 06:03:36.063601] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:04:41.731 [2024-12-09 06:03:36.063633] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:04:41.731 [2024-12-09 06:03:36.063640] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:04:41.731 [2024-12-09 06:03:36.063658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:04:41.731 [2024-12-09 06:03:36.063689] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:04:41.731 [2024-12-09 06:03:36.063750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:41.731 [2024-12-09 06:03:36.063773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3b850 with addr=10.0.0.3, port=4420 01:04:41.731 [2024-12-09 06:03:36.063784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3b850 is same with the state(6) to be set 01:04:41.731 [2024-12-09 06:03:36.063801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3b850 (9): Bad file descriptor 01:04:41.731 [2024-12-09 06:03:36.063816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:04:41.731 [2024-12-09 06:03:36.063825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:04:41.731 [2024-12-09 06:03:36.063835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:04:41.731 [2024-12-09 06:03:36.063843] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
01:04:41.731 [2024-12-09 06:03:36.063849] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:04:41.731 [2024-12-09 06:03:36.063856] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:04:41.731 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:04:41.731 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:04:41.731 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:04:41.731 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 01:04:41.731 [2024-12-09 06:03:36.073700] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:04:41.731 [2024-12-09 06:03:36.073737] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:04:41.731 [2024-12-09 06:03:36.073745] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:04:41.731 [2024-12-09 06:03:36.073750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:04:41.731 [2024-12-09 06:03:36.073781] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:04:41.731 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:04:41.731 [2024-12-09 06:03:36.073855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:41.731 [2024-12-09 06:03:36.073887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3b850 with addr=10.0.0.3, port=4420 01:04:41.731 [2024-12-09 06:03:36.073907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3b850 is same with the state(6) to be set 01:04:41.731 [2024-12-09 06:03:36.073926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3b850 (9): Bad file descriptor 01:04:41.731 [2024-12-09 06:03:36.073941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:04:41.731 [2024-12-09 06:03:36.073950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:04:41.731 [2024-12-09 06:03:36.073960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:04:41.731 [2024-12-09 06:03:36.073969] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:04:41.731 [2024-12-09 06:03:36.073977] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:04:41.731 [2024-12-09 06:03:36.073982] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
01:04:41.731 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:04:41.731 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 01:04:41.731 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 01:04:41.731 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:04:41.731 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:04:41.731 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:41.731 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:04:41.731 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:41.731 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:04:41.731 [2024-12-09 06:03:36.083836] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:04:41.731 [2024-12-09 06:03:36.083868] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:04:41.731 [2024-12-09 06:03:36.083875] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:04:41.731 [2024-12-09 06:03:36.083881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:04:41.731 [2024-12-09 06:03:36.083910] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:04:41.731 [2024-12-09 06:03:36.083975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:41.731 [2024-12-09 06:03:36.084013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3b850 with addr=10.0.0.3, port=4420 01:04:41.731 [2024-12-09 06:03:36.084039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3b850 is same with the state(6) to be set 01:04:41.731 [2024-12-09 06:03:36.084059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3b850 (9): Bad file descriptor 01:04:41.731 [2024-12-09 06:03:36.084098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:04:41.731 [2024-12-09 06:03:36.084114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:04:41.731 [2024-12-09 06:03:36.084129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:04:41.731 [2024-12-09 06:03:36.084138] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:04:41.731 [2024-12-09 06:03:36.084144] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:04:41.731 [2024-12-09 06:03:36.084149] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:04:41.731 [2024-12-09 06:03:36.093920] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
01:04:41.731 [2024-12-09 06:03:36.093977] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:04:41.731 [2024-12-09 06:03:36.094003] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:04:41.731 [2024-12-09 06:03:36.094008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:04:41.731 [2024-12-09 06:03:36.094035] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:04:41.731 [2024-12-09 06:03:36.094090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:41.732 [2024-12-09 06:03:36.094110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3b850 with addr=10.0.0.3, port=4420 01:04:41.732 [2024-12-09 06:03:36.094120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3b850 is same with the state(6) to be set 01:04:41.732 [2024-12-09 06:03:36.094155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3b850 (9): Bad file descriptor 01:04:41.732 [2024-12-09 06:03:36.094178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:04:41.732 [2024-12-09 06:03:36.094194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:04:41.732 [2024-12-09 06:03:36.094206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:04:41.732 [2024-12-09 06:03:36.094215] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:04:41.732 [2024-12-09 06:03:36.094220] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:04:41.732 [2024-12-09 06:03:36.094225] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
01:04:41.732 [2024-12-09 06:03:36.097854] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 01:04:41.732 [2024-12-09 06:03:36.097888] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 01:04:41.732 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 01:04:41.990 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:04:41.990 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:04:41.990 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:04:41.990 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:41.990 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:04:41.990 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:41.990 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:41.990 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 01:04:41.990 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:04:41.990 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 01:04:41.990 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 01:04:41.990 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 01:04:41.990 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 01:04:41.990 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 01:04:41.990 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 01:04:41.990 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 01:04:41.990 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 01:04:41.990 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 01:04:41.990 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 01:04:41.990 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:41.990 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:41.990 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:41.990 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 01:04:41.990 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 01:04:41.990 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 01:04:41.990 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 01:04:41.990 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:04:41.990 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:41.990 06:03:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:42.942 [2024-12-09 06:03:37.452822] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 01:04:42.942 [2024-12-09 06:03:37.452854] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 01:04:42.942 [2024-12-09 06:03:37.452874] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:04:43.201 [2024-12-09 06:03:37.538942] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 01:04:43.201 [2024-12-09 06:03:37.597494] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 01:04:43.201 [2024-12-09 06:03:37.598178] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xec2bf0:1 started. 01:04:43.201 [2024-12-09 06:03:37.600243] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 01:04:43.201 [2024-12-09 06:03:37.600292] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:43.201 [2024-12-09 06:03:37.601605] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xec2bf0 was disconnected and freed. delete nvme_qpair. 
01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:43.201 2024/12/09 06:03:37 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 01:04:43.201 request: 01:04:43.201 { 01:04:43.201 "method": "bdev_nvme_start_discovery", 01:04:43.201 "params": { 01:04:43.201 "name": "nvme", 01:04:43.201 "trtype": "tcp", 01:04:43.201 "traddr": "10.0.0.3", 01:04:43.201 "adrfam": "ipv4", 01:04:43.201 "trsvcid": "8009", 01:04:43.201 "hostnqn": "nqn.2021-12.io.spdk:test", 01:04:43.201 "wait_for_attach": true 01:04:43.201 } 01:04:43.201 } 01:04:43.201 Got JSON-RPC error response 01:04:43.201 GoRPCClient: error on JSON-RPC call 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:43.201 
06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:43.201 2024/12/09 06:03:37 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 01:04:43.201 request: 01:04:43.201 { 01:04:43.201 "method": "bdev_nvme_start_discovery", 01:04:43.201 "params": { 01:04:43.201 "name": "nvme_second", 01:04:43.201 "trtype": "tcp", 01:04:43.201 "traddr": "10.0.0.3", 01:04:43.201 "adrfam": "ipv4", 01:04:43.201 "trsvcid": "8009", 01:04:43.201 "hostnqn": "nqn.2021-12.io.spdk:test", 01:04:43.201 "wait_for_attach": true 01:04:43.201 } 
01:04:43.201 } 01:04:43.201 Got JSON-RPC error response 01:04:43.201 GoRPCClient: error on JSON-RPC call 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 01:04:43.201 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:43.460 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 01:04:43.460 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 01:04:43.460 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:04:43.460 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:43.460 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:43.460 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 01:04:43.460 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 01:04:43.460 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 01:04:43.460 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:43.460 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 01:04:43.460 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 01:04:43.460 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 01:04:43.460 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 01:04:43.460 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:04:43.460 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:04:43.460 06:03:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:04:43.460 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:04:43.460 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 01:04:43.460 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:43.460 06:03:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:44.396 [2024-12-09 06:03:38.892698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:44.396 [2024-12-09 06:03:38.892773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec1390 with addr=10.0.0.3, port=8010 01:04:44.396 [2024-12-09 06:03:38.892798] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 01:04:44.396 [2024-12-09 06:03:38.892808] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:04:44.396 [2024-12-09 06:03:38.892818] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 01:04:45.333 [2024-12-09 06:03:39.892708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:04:45.333 [2024-12-09 06:03:39.892782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec1390 with addr=10.0.0.3, port=8010 01:04:45.333 [2024-12-09 06:03:39.892805] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 01:04:45.333 [2024-12-09 06:03:39.892815] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:04:45.333 [2024-12-09 06:03:39.892825] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 01:04:46.710 [2024-12-09 06:03:40.892521] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 01:04:46.710 2024/12/09 06:03:40 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 01:04:46.710 request: 01:04:46.710 { 01:04:46.710 "method": "bdev_nvme_start_discovery", 01:04:46.710 "params": { 01:04:46.710 "name": "nvme_second", 01:04:46.710 "trtype": "tcp", 01:04:46.710 "traddr": "10.0.0.3", 01:04:46.710 "adrfam": "ipv4", 01:04:46.710 "trsvcid": "8010", 01:04:46.710 "hostnqn": "nqn.2021-12.io.spdk:test", 01:04:46.710 "wait_for_attach": false, 01:04:46.710 "attach_timeout_ms": 3000 01:04:46.710 } 01:04:46.710 } 01:04:46.710 Got JSON-RPC error response 01:04:46.710 GoRPCClient: error on JSON-RPC call 01:04:46.710 06:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:04:46.710 06:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 01:04:46.710 06:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:04:46.710 06:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:04:46.710 06:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 01:04:46.710 06:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 01:04:46.710 06:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:04:46.710 06:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:04:46.710 06:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 01:04:46.710 06:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:46.710 06:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 01:04:46.710 06:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 01:04:46.710 06:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:04:46.710 06:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 01:04:46.710 06:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 01:04:46.710 06:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 88494 01:04:46.710 06:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 01:04:46.710 06:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 01:04:46.710 06:03:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 01:04:46.710 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:04:46.710 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 01:04:46.710 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 01:04:46.710 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:04:46.710 rmmod nvme_tcp 01:04:46.710 rmmod nvme_fabrics 01:04:46.710 rmmod nvme_keyring 01:04:46.710 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:04:46.710 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 01:04:46.710 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 01:04:46.710 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 88457 ']' 01:04:46.710 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 88457 01:04:46.710 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 88457 ']' 01:04:46.710 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 88457 01:04:46.710 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 01:04:46.710 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:04:46.710 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88457 01:04:46.710 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:04:46.710 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:04:46.710 killing process with pid 88457 01:04:46.710 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 88457' 01:04:46.710 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 88457 01:04:46.710 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 88457 01:04:46.710 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:04:46.710 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:04:46.710 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:04:46.710 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 01:04:46.710 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 01:04:46.710 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:04:46.710 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 01:04:46.711 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:04:46.711 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:04:46.711 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:04:46.711 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:04:46.976 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:04:46.976 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:04:46.976 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:04:46.976 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:04:46.976 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:04:46.976 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:04:46.976 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:04:46.976 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:04:46.976 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:04:46.976 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:04:46.976 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:04:46.976 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 01:04:46.976 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:04:46.976 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:04:46.976 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:04:46.976 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 01:04:46.976 01:04:46.976 real 0m10.145s 01:04:46.976 user 0m19.902s 01:04:46.976 sys 0m1.583s 
01:04:46.976 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 01:04:46.976 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 01:04:46.976 ************************************ 01:04:46.976 END TEST nvmf_host_discovery 01:04:46.976 ************************************ 01:04:46.976 06:03:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 01:04:46.976 06:03:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:04:46.976 06:03:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:04:46.976 06:03:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:04:46.976 ************************************ 01:04:46.976 START TEST nvmf_host_multipath_status 01:04:46.976 ************************************ 01:04:46.976 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 01:04:47.286 * Looking for test storage... 01:04:47.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:04:47.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:47.286 --rc genhtml_branch_coverage=1 01:04:47.286 --rc genhtml_function_coverage=1 01:04:47.286 --rc genhtml_legend=1 01:04:47.286 --rc geninfo_all_blocks=1 01:04:47.286 --rc geninfo_unexecuted_blocks=1 01:04:47.286 01:04:47.286 ' 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:04:47.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:47.286 --rc genhtml_branch_coverage=1 01:04:47.286 --rc genhtml_function_coverage=1 01:04:47.286 --rc genhtml_legend=1 01:04:47.286 --rc geninfo_all_blocks=1 01:04:47.286 --rc geninfo_unexecuted_blocks=1 01:04:47.286 01:04:47.286 ' 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:04:47.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:47.286 --rc genhtml_branch_coverage=1 01:04:47.286 --rc genhtml_function_coverage=1 01:04:47.286 --rc genhtml_legend=1 01:04:47.286 --rc geninfo_all_blocks=1 01:04:47.286 --rc geninfo_unexecuted_blocks=1 01:04:47.286 01:04:47.286 ' 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:04:47.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:04:47.286 --rc genhtml_branch_coverage=1 01:04:47.286 --rc genhtml_function_coverage=1 01:04:47.286 --rc genhtml_legend=1 01:04:47.286 --rc geninfo_all_blocks=1 01:04:47.286 --rc geninfo_unexecuted_blocks=1 01:04:47.286 01:04:47.286 ' 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:04:47.286 06:03:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:04:47.286 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:04:47.287 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:04:47.287 Cannot find device "nvmf_init_br" 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:04:47.287 Cannot find device "nvmf_init_br2" 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:04:47.287 Cannot find device "nvmf_tgt_br" 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:04:47.287 Cannot find device "nvmf_tgt_br2" 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:04:47.287 Cannot find device "nvmf_init_br" 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:04:47.287 Cannot find device "nvmf_init_br2" 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 01:04:47.287 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:04:47.571 Cannot find device "nvmf_tgt_br" 01:04:47.571 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 01:04:47.571 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:04:47.571 Cannot find device "nvmf_tgt_br2" 01:04:47.571 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 01:04:47.571 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:04:47.571 Cannot find device "nvmf_br" 01:04:47.571 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 01:04:47.571 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 01:04:47.571 Cannot find device "nvmf_init_if" 01:04:47.571 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 01:04:47.571 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:04:47.571 Cannot find device "nvmf_init_if2" 01:04:47.571 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 01:04:47.571 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:04:47.571 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:04:47.571 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 01:04:47.571 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:04:47.571 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:04:47.571 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 01:04:47.571 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:04:47.571 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:04:47.571 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:04:47.571 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:04:47.571 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:04:47.571 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:04:47.571 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:04:47.571 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:04:47.572 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:04:47.572 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:04:47.572 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:04:47.572 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:04:47.572 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:04:47.572 06:03:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:04:47.572 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:04:47.572 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:04:47.572 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:04:47.572 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:04:47.572 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:04:47.572 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:04:47.572 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:04:47.572 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:04:47.572 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:04:47.572 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:04:47.572 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:04:47.572 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:04:47.572 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:04:47.572 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:04:47.572 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:04:47.572 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:04:47.572 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:04:47.572 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:04:47.572 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:04:47.572 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:04:47.572 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 01:04:47.572 01:04:47.572 --- 10.0.0.3 ping statistics --- 01:04:47.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:47.572 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 01:04:47.572 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:04:47.572 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:04:47.572 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 01:04:47.572 01:04:47.572 --- 10.0.0.4 ping statistics --- 01:04:47.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:47.572 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 01:04:47.572 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:04:47.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:04:47.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 01:04:47.572 01:04:47.572 --- 10.0.0.1 ping statistics --- 01:04:47.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:47.572 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 01:04:47.572 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:04:47.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:04:47.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 01:04:47.572 01:04:47.572 --- 10.0.0.2 ping statistics --- 01:04:47.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:04:47.572 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 01:04:47.572 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:04:47.572 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 01:04:47.572 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:04:47.572 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:04:47.572 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:04:47.572 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:04:47.572 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:04:47.572 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:04:47.572 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:04:47.831 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 01:04:47.831 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:04:47.831 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 01:04:47.831 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:04:47.831 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=89009 01:04:47.831 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 01:04:47.831 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 89009 01:04:47.831 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 89009 ']' 01:04:47.831 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:04:47.831 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 01:04:47.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:04:47.831 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
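For readability: the interface plumbing that nvmf/common.sh assembled earlier in this trace boils down to one veth pair per path, bridged together, with the initiator side in the default namespace and the target side in nvmf_tgt_ns_spdk. A distilled sketch using only the names and addresses from the trace (the second *_if2/*_br2 pair is omitted, it mirrors the first with 10.0.0.2 and 10.0.0.4):

ip netns add nvmf_tgt_ns_spdk                                # target stack gets its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                              # bridge joins the *_br peers
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# bring every link up and open TCP/4420 in iptables; the cross-namespace pings above verify connectivity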
01:04:47.831 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 01:04:47.831 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:04:47.831 [2024-12-09 06:03:42.247402] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:04:47.831 [2024-12-09 06:03:42.247499] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:04:47.831 [2024-12-09 06:03:42.395059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:04:48.089 [2024-12-09 06:03:42.432439] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:04:48.089 [2024-12-09 06:03:42.432493] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:04:48.089 [2024-12-09 06:03:42.432507] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:04:48.089 [2024-12-09 06:03:42.432517] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:04:48.089 [2024-12-09 06:03:42.432526] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:04:48.089 [2024-12-09 06:03:42.433377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:04:48.089 [2024-12-09 06:03:42.433393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:04:48.089 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:04:48.089 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 01:04:48.090 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:04:48.090 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 01:04:48.090 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:04:48.090 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:04:48.090 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=89009 01:04:48.090 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:04:48.347 [2024-12-09 06:03:42.884121] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:04:48.347 06:03:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:04:48.915 Malloc0 01:04:48.915 06:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 01:04:49.173 06:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:04:49.431 06:03:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:04:49.689 [2024-12-09 06:03:44.042890] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:04:49.689 06:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 01:04:49.947 [2024-12-09 06:03:44.379566] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:04:49.947 06:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=89099 01:04:49.947 06:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 01:04:49.947 06:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:04:49.947 06:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 89099 /var/tmp/bdevperf.sock 01:04:49.947 06:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 89099 ']' 01:04:49.947 06:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:04:49.947 06:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 01:04:49.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:04:49.947 06:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
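The target-side configuration that just completed reduces to a handful of RPCs against the nvmf_tgt running inside the namespace; the -r flag on nvmf_create_subsystem turns on ANA reporting, which is what the rest of this test exercises. Condensed from the trace (rpc.py is scripts/rpc.py from the SPDK repo, talking to the default /var/tmp/spdk.sock):

rpc.py nvmf_create_transport -t tcp -o -u 8192               # TCP transport, options as captured above
rpc.py bdev_malloc_create 64 512 -b Malloc0                  # 64 MB RAM-backed bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421   # two listeners = two paths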
01:04:49.947 06:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 01:04:49.947 06:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:04:50.204 06:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:04:50.204 06:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 01:04:50.204 06:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 01:04:50.460 06:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 01:04:51.034 Nvme0n1 01:04:51.034 06:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 01:04:51.291 Nvme0n1 01:04:51.291 06:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 01:04:51.291 06:03:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 01:04:53.819 06:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 01:04:53.819 06:03:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 01:04:53.819 06:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 01:04:54.078 06:03:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 01:04:55.013 06:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 01:04:55.013 06:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:04:55.013 06:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:04:55.013 06:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:04:55.271 06:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:04:55.271 06:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:04:55.271 06:03:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:04:55.271 06:03:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:04:55.529 06:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:04:55.529 06:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:04:55.529 06:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:04:55.529 06:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:04:55.788 06:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:04:55.788 06:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:04:55.788 06:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:04:55.788 06:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:04:56.355 06:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:04:56.355 06:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:04:56.355 06:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:04:56.355 06:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:04:56.355 06:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:04:56.355 06:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:04:56.355 06:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:04:56.355 06:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:04:56.614 06:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:04:56.614 06:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 01:04:56.614 06:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:04:57.181 06:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
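Each port_status probe above (and every one that follows) is the same two-step query against the bdevperf app's private RPC socket: dump the I/O paths, then filter on the listener port. For example, for the 4420 path:

rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
    | jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current'

check_status simply runs this for current, connected and accessible on both 4420 and 4421 and compares the output against the six expected values it was given.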
01:04:57.519 06:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 01:04:58.457 06:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 01:04:58.457 06:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 01:04:58.457 06:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:04:58.457 06:03:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:04:58.715 06:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:04:58.715 06:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:04:58.715 06:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:04:58.715 06:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:04:58.974 06:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:04:58.974 06:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:04:58.974 06:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:04:58.974 06:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:04:59.543 06:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:04:59.543 06:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:04:59.543 06:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:04:59.543 06:03:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:04:59.543 06:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:04:59.543 06:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:04:59.543 06:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:04:59.543 06:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:05:00.133 06:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:00.133 06:03:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:05:00.133 06:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:00.133 06:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:05:00.392 06:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:00.392 06:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 01:05:00.392 06:03:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:05:00.666 06:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 01:05:00.928 06:03:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 01:05:01.860 06:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 01:05:01.860 06:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:05:01.860 06:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:01.860 06:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:05:02.117 06:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:02.117 06:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:05:02.117 06:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:02.117 06:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:05:02.375 06:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:05:02.375 06:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:05:02.375 06:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:05:02.375 06:03:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:02.941 06:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:02.941 06:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 01:05:02.941 06:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:02.941 06:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:05:03.199 06:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:03.199 06:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:05:03.199 06:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:05:03.199 06:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:03.457 06:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:03.457 06:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:05:03.457 06:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:03.457 06:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:05:03.715 06:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:03.715 06:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 01:05:03.715 06:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:05:04.279 06:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 01:05:04.279 06:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 01:05:05.654 06:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 01:05:05.654 06:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:05:05.654 06:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:05.654 06:03:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:05:05.654 06:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:05.654 06:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current false 01:05:05.654 06:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:05.654 06:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:05:06.222 06:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:05:06.222 06:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:05:06.222 06:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:06.222 06:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:05:06.481 06:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:06.481 06:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:05:06.481 06:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:06.481 06:04:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:05:06.740 06:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:06.740 06:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:05:06.740 06:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:06.740 06:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:05:06.998 06:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:06.998 06:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 01:05:06.998 06:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:05:06.998 06:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:07.256 06:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:05:07.256 06:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 01:05:07.256 06:04:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 01:05:07.514 06:04:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 01:05:07.772 06:04:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 01:05:08.708 06:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 01:05:08.708 06:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 01:05:08.708 06:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:08.708 06:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:05:09.275 06:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:05:09.275 06:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:05:09.275 06:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:09.275 06:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:05:09.534 06:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:05:09.534 06:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:05:09.534 06:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:09.534 06:04:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:05:09.793 06:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:09.793 06:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:05:09.793 06:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:09.793 06:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:05:10.051 06:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:10.051 06:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 01:05:10.051 06:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:10.051 06:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] 
| select (.transport.trsvcid=="4420").accessible' 01:05:10.310 06:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:05:10.310 06:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 01:05:10.310 06:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:05:10.310 06:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:10.886 06:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:05:10.886 06:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 01:05:10.886 06:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 01:05:11.156 06:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 01:05:11.414 06:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 01:05:12.347 06:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 01:05:12.347 06:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 01:05:12.347 06:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:12.347 06:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:05:12.912 06:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:05:12.912 06:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:05:12.912 06:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:12.912 06:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:05:13.169 06:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:13.169 06:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:05:13.169 06:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:13.169 06:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
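The repeating pattern through this stretch: set_ANA_state applies one ANA state per listener, the test sleeps a second so the host can pick up the ANA change, and the six check_status arguments are the expected current/connected/accessible values for ports 4420 and 4421 in that order. For the inaccessible/optimized combination being verified here, that means:

    4420 (inaccessible): current=false  connected=true  accessible=false
    4421 (optimized):    current=true   connected=true  accessible=true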
01:05:13.426 06:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:13.426 06:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:05:13.426 06:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:05:13.426 06:04:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:13.683 06:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:13.683 06:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 01:05:13.683 06:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:13.683 06:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:05:13.967 06:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:05:13.967 06:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:05:13.967 06:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:13.967 06:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:05:14.531 06:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:14.531 06:04:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 01:05:14.788 06:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 01:05:14.788 06:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 01:05:15.045 06:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 01:05:15.302 06:04:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 01:05:16.235 06:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 01:05:16.235 06:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:05:16.236 06:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
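Everything up to this point ran with the default active_passive multipath policy, so only one path at a time ever reported current=true. The bdev_nvme_set_multipath_policy call that just appeared switches Nvme0n1 to active_active, which is why the check now in progress expects current=true on both ports while both listeners are optimized. The switch itself is a single RPC against the bdevperf socket:

rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active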
01:05:16.236 06:04:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:05:16.803 06:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:16.803 06:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:05:16.803 06:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:16.803 06:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:05:17.061 06:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:17.061 06:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:05:17.061 06:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:17.061 06:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:05:17.319 06:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:17.319 06:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:05:17.319 06:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:17.319 06:04:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:05:17.578 06:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:17.578 06:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:05:17.578 06:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:17.578 06:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:05:17.836 06:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:17.836 06:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:05:17.836 06:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:17.836 06:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:05:18.403 06:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:18.403 
06:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 01:05:18.403 06:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:05:18.662 06:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 01:05:18.920 06:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 01:05:19.856 06:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 01:05:19.856 06:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 01:05:19.856 06:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:19.856 06:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:05:20.115 06:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:05:20.115 06:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:05:20.115 06:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:20.115 06:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:05:20.373 06:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:20.373 06:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:05:20.373 06:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:20.373 06:04:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:05:20.941 06:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:20.941 06:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:05:20.941 06:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:05:20.941 06:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:21.199 06:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:21.199 06:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:05:21.199 06:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:21.199 06:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:05:21.457 06:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:21.457 06:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:05:21.457 06:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:21.457 06:04:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:05:21.716 06:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:21.716 06:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 01:05:21.716 06:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:05:22.284 06:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 01:05:22.544 06:04:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 01:05:23.498 06:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 01:05:23.498 06:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:05:23.498 06:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:23.498 06:04:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:05:23.757 06:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:23.757 06:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 01:05:23.757 06:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:23.757 06:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:05:24.015 06:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:24.015 06:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 01:05:24.015 06:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:24.015 06:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:05:24.273 06:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:24.273 06:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:05:24.273 06:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:24.273 06:04:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:05:24.838 06:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:24.838 06:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:05:24.838 06:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:24.838 06:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:05:25.097 06:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:25.097 06:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 01:05:25.097 06:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:25.097 06:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 01:05:25.354 06:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:25.354 06:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 01:05:25.354 06:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:05:25.612 06:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 01:05:25.870 06:04:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 01:05:26.803 06:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 01:05:26.803 06:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 01:05:26.803 06:04:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:26.803 06:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 01:05:27.367 06:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:27.367 06:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 01:05:27.367 06:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:27.367 06:04:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 01:05:27.624 06:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:05:27.624 06:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 01:05:27.624 06:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:27.624 06:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 01:05:27.883 06:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:27.883 06:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 01:05:27.883 06:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:27.883 06:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 01:05:28.141 06:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:28.141 06:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 01:05:28.141 06:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:28.141 06:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 01:05:28.399 06:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 01:05:28.399 06:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 01:05:28.399 06:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 01:05:28.400 06:04:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 01:05:28.982 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 01:05:28.982 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 89099 01:05:28.982 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 89099 ']' 01:05:28.982 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 89099 01:05:28.982 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 01:05:28.982 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:05:28.982 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89099 01:05:28.982 killing process with pid 89099 01:05:28.982 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:05:28.982 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:05:28.982 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89099' 01:05:28.982 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 89099 01:05:28.982 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 89099 01:05:28.982 { 01:05:28.982 "results": [ 01:05:28.982 { 01:05:28.982 "job": "Nvme0n1", 01:05:28.982 "core_mask": "0x4", 01:05:28.982 "workload": "verify", 01:05:28.982 "status": "terminated", 01:05:28.982 "verify_range": { 01:05:28.982 "start": 0, 01:05:28.982 "length": 16384 01:05:28.982 }, 01:05:28.982 "queue_depth": 128, 01:05:28.982 "io_size": 4096, 01:05:28.982 "runtime": 37.451772, 01:05:28.982 "iops": 8380.110826264776, 01:05:28.982 "mibps": 32.73480791509678, 01:05:28.982 "io_failed": 0, 01:05:28.982 "io_timeout": 0, 01:05:28.982 "avg_latency_us": 15243.84184732139, 01:05:28.982 "min_latency_us": 181.52727272727273, 01:05:28.982 "max_latency_us": 4087539.898181818 01:05:28.982 } 01:05:28.982 ], 01:05:28.982 "core_count": 1 01:05:28.982 } 01:05:28.982 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 89099 01:05:28.982 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:05:28.982 [2024-12-09 06:03:44.473731] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:05:28.982 [2024-12-09 06:03:44.473840] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89099 ] 01:05:28.982 [2024-12-09 06:03:44.614737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:28.982 [2024-12-09 06:03:44.648001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:05:28.982 Running I/O for 90 seconds... 
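The trace above drives the same status check over and over: query bdevperf over its RPC socket with bdev_nvme_get_io_paths, filter the io_paths entries by listener port with jq, compare the connected/accessible/current flag against the expected value, and flip the ANA state of the 4420/4421 listeners in between with nvmf_subsystem_listener_set_ana_state. A minimal standalone sketch of that pattern follows; it reuses only names that appear in the log (rpc.py, /var/tmp/bdevperf.sock, nqn.2016-06.io.spdk:cnode1, 10.0.0.3), while the check_path helper is a made-up stand-in for the script's port_status function, not the script itself.

#!/usr/bin/env bash
# Sketch only: condenses the port_status / set_ANA_state pattern visible in the trace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1

# check_path <port> <field> <expected>  -- hypothetical helper mirroring port_status
check_path() {
    local port=$1 field=$2 expect=$3 got
    got=$("$rpc" -s "$sock" bdev_nvme_get_io_paths \
          | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ "$got" == "$expect" ]] || { echo "port $port: $field=$got, expected $expect" >&2; return 1; }
}

# Flip the ANA states on the target, as in set_ANA_state non_optimized inaccessible above.
"$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
"$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4421 -n inaccessible
sleep 1

# 4420 should now be the current, accessible path; 4421 stays connected but inaccessible.
check_path 4420 current true
check_path 4421 current false
check_path 4421 connected true
check_path 4421 accessible false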
01:05:28.982 8776.00 IOPS, 34.28 MiB/s [2024-12-09T06:04:23.568Z] 8890.50 IOPS, 34.73 MiB/s [2024-12-09T06:04:23.568Z] 8951.00 IOPS, 34.96 MiB/s [2024-12-09T06:04:23.568Z] 8954.25 IOPS, 34.98 MiB/s [2024-12-09T06:04:23.568Z] 8920.00 IOPS, 34.84 MiB/s [2024-12-09T06:04:23.568Z] 8934.50 IOPS, 34.90 MiB/s [2024-12-09T06:04:23.568Z] 8939.00 IOPS, 34.92 MiB/s [2024-12-09T06:04:23.568Z] 8938.25 IOPS, 34.92 MiB/s [2024-12-09T06:04:23.568Z] 8930.11 IOPS, 34.88 MiB/s [2024-12-09T06:04:23.568Z] 8941.60 IOPS, 34.93 MiB/s [2024-12-09T06:04:23.568Z] 8942.36 IOPS, 34.93 MiB/s [2024-12-09T06:04:23.568Z] 8947.33 IOPS, 34.95 MiB/s [2024-12-09T06:04:23.568Z] 8949.62 IOPS, 34.96 MiB/s [2024-12-09T06:04:23.568Z] 8938.86 IOPS, 34.92 MiB/s [2024-12-09T06:04:23.568Z] 8932.07 IOPS, 34.89 MiB/s [2024-12-09T06:04:23.568Z] 8918.19 IOPS, 34.84 MiB/s [2024-12-09T06:04:23.568Z] [2024-12-09 06:04:01.964167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:28.982 [2024-12-09 06:04:01.964238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:05:28.982 [2024-12-09 06:04:01.964276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.982 [2024-12-09 06:04:01.964296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:05:28.982 [2024-12-09 06:04:01.964320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.982 [2024-12-09 06:04:01.964336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:05:28.982 [2024-12-09 06:04:01.964358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.982 [2024-12-09 06:04:01.964374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:05:28.982 [2024-12-09 06:04:01.964396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.982 [2024-12-09 06:04:01.964412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:05:28.982 [2024-12-09 06:04:01.964435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.982 [2024-12-09 06:04:01.964451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:05:28.982 [2024-12-09 06:04:01.964473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.982 [2024-12-09 06:04:01.964488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:05:28.982 [2024-12-09 06:04:01.964511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:05:28.982 [2024-12-09 06:04:01.964526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:05:28.982 [2024-12-09 06:04:01.964549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.982 [2024-12-09 06:04:01.964564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:05:28.982 [2024-12-09 06:04:01.964619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.982 [2024-12-09 06:04:01.964636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:05:28.982 [2024-12-09 06:04:01.964674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.982 [2024-12-09 06:04:01.964692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:05:28.982 [2024-12-09 06:04:01.964714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.982 [2024-12-09 06:04:01.964735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:05:28.982 [2024-12-09 06:04:01.964774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.982 [2024-12-09 06:04:01.964801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:05:28.982 [2024-12-09 06:04:01.964840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.982 [2024-12-09 06:04:01.964867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:05:28.982 [2024-12-09 06:04:01.964892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.982 [2024-12-09 06:04:01.964909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:05:28.982 [2024-12-09 06:04:01.964931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.982 [2024-12-09 06:04:01.964946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:05:28.982 [2024-12-09 06:04:01.964969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.982 [2024-12-09 06:04:01.964989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:05:28.982 [2024-12-09 06:04:01.965012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 
lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.982 [2024-12-09 06:04:01.965028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:05:28.982 [2024-12-09 06:04:01.965052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.982 [2024-12-09 06:04:01.965067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:05:28.982 [2024-12-09 06:04:01.965089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.982 [2024-12-09 06:04:01.965110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.965138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.965153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.965175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.965204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.965228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.965244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.965266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.965282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.965304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.965319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.965341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.965356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.965379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.965395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.965417] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.965432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.965454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.965470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.965492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.965508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.965530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.965545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.965567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.965583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.965605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.965620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.965643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.965699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.965724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:28.983 [2024-12-09 06:04:01.965740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.965763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:28.983 [2024-12-09 06:04:01.965779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.966741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:28.983 [2024-12-09 06:04:01.966780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
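The block of *NOTICE* lines running through this part of the dump is bdevperf's per-I/O logging: each READ/WRITE on qid:1 completes with status (03/02), which spdk_nvme_print_completion prints as (sct/sc). In NVMe terms that is Status Code Type 0x3 (Path Related Status) with Status Code 0x02, Asymmetric Access Inaccessible, consistent with I/O landing on a path whose ANA state the test had set to inaccessible. When reading a dump like this it can be easier to tally the completion statuses than to scan them line by line; a small sketch using the try.txt path from the cat command above (the counts shown are purely illustrative):

# Sketch: tally completion statuses in the bdevperf log dumped above.
log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
grep -o 'print_completion: \*NOTICE\*: [A-Z ]* ([0-9a-f]*/[0-9a-f]*)' "$log" \
  | sed 's/.*NOTICE\*: //' \
  | sort | uniq -c | sort -rn
# Illustrative output:
#    1234 ASYMMETRIC ACCESS INACCESSIBLE (03/02)
#      56 SUCCESS (00/00)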
01:05:28.983 [2024-12-09 06:04:01.966827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.966845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.966870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.966887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.966909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.966925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.966947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.966963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.966986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.967002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.967024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.967039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.967062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.967077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.968264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.968291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.968317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.968334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.968375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.968393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:39 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.968415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.968431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.968453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.968468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.968501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.968517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.968554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.968581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.968615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.968634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.968683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.968706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.968729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.968746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.968770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.968787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.968809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.968830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.968854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.968870] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.968892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.983 [2024-12-09 06:04:01.968908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:05:28.983 [2024-12-09 06:04:01.968943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.968969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.968993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.969009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.969042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.969057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.969080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.969096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.969118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.969134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.969156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.969172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.969194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.969210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.969232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.969247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.969269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:05:28.984 [2024-12-09 06:04:01.969285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.969307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.969323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.969344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.969360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.969382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.969398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.969420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.969444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.969469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.969484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.969507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.969522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.969544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.969560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.969592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.969608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.969631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.969662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.969689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 
lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.969705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.969734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.969751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.969773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.969789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.969811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.969827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.969850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.969865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.969887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.969902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.969924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.969949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.969973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.969989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.970011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.970026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.970048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.970064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.970086] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.970102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.970124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.970139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.970161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.970177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.970199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:28.984 [2024-12-09 06:04:01.970215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.970237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:28.984 [2024-12-09 06:04:01.970253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.970275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:28.984 [2024-12-09 06:04:01.970291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.970314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:28.984 [2024-12-09 06:04:01.970330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.970354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.970371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.970393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.970409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:05:28.984 [2024-12-09 06:04:01.970443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.970460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001b p:0 m:0 dnr:0 
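The summary JSON printed when the bdevperf process was killed (job Nvme0n1, runtime ~37.45 s, 8380.11 IOPS, io_size 4096) is internally consistent: 8380.11 * 4096 / 1048576 is ~32.73 MiB/s, matching the reported mibps. A small jq sketch for pulling those fields out of such a blob is below; results.json is a hypothetical file holding that JSON, while the field names are the ones visible in the block above.

# Sketch: extract and sanity-check the bdevperf summary fields seen above.
jq -r '.results[] |
       "\(.job): \(.iops) IOPS, \(.mibps) MiB/s over \(.runtime) s, avg latency \(.avg_latency_us) us"' \
   results.json

# Recompute MiB/s from IOPS and io_size as a cross-check:
jq -r '.results[] | .iops * .io_size / 1048576' results.json
# -> ~32.73 for the run above (8380.11 * 4096 / 1048576)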
01:05:28.984 [2024-12-09 06:04:01.970483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.984 [2024-12-09 06:04:01.970498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.970520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:28.985 [2024-12-09 06:04:01.970535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.970557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.970573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.970595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.970611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.970657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.970679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.971890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.971924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.971956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.971975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.971997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.972013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.972036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.972052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.972074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.972090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:86 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.972112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.972127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.972165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.972183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.972205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.972221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.972246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.972263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.972285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.972301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.972323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.972339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.972360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.972376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.972399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.972414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.972436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.972452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.972474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.972490] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.972512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.972527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.972549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.972565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.972587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.972603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.972624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.972664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.972692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.972709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.972732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.972747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.972770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.972785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.972808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.972823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.972846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.972861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.972884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:05:28.985 [2024-12-09 06:04:01.972900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.972922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.972946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.972968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.972984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.973005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.973021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.973043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.973059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.973081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.973096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.973118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:28.985 [2024-12-09 06:04:01.973142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.973166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:28.985 [2024-12-09 06:04:01.973182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.973204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.973219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.973242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.985 [2024-12-09 06:04:01.973257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:05:28.985 [2024-12-09 06:04:01.973279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 
lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.986 [2024-12-09 06:04:01.973294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:05:28.986 [2024-12-09 06:04:01.973322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.986 [2024-12-09 06:04:01.973350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:05:28.986 [2024-12-09 06:04:01.973385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.986 [2024-12-09 06:04:01.973409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:05:28.986 [2024-12-09 06:04:01.973433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.986 [2024-12-09 06:04:01.973449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:05:28.986 [2024-12-09 06:04:01.973477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.986 [2024-12-09 06:04:01.973492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:05:28.986 [2024-12-09 06:04:01.973515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.986 [2024-12-09 06:04:01.973531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:05:28.986 [2024-12-09 06:04:01.973553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.986 [2024-12-09 06:04:01.973569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:05:28.986 [2024-12-09 06:04:01.973592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.986 [2024-12-09 06:04:01.973621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:05:28.986 [2024-12-09 06:04:01.973677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.986 [2024-12-09 06:04:01.973701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:05:28.986 [2024-12-09 06:04:01.973737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.986 [2024-12-09 06:04:01.973755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:05:28.986 [2024-12-09 06:04:01.973777] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.986 [2024-12-09 06:04:01.973793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:05:28.986 [2024-12-09 06:04:01.973816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.986 [2024-12-09 06:04:01.973846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:05:28.986 [2024-12-09 06:04:01.973883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.986 [2024-12-09 06:04:01.973902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:05:28.986 [2024-12-09 06:04:01.974552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.986 [2024-12-09 06:04:01.974582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:05:28.986 [2024-12-09 06:04:01.974611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.986 [2024-12-09 06:04:01.974630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:05:28.986 [2024-12-09 06:04:01.974697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.986 [2024-12-09 06:04:01.974715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:05:28.986 [2024-12-09 06:04:01.974738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.986 [2024-12-09 06:04:01.974753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:05:28.986 [2024-12-09 06:04:01.974777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.986 [2024-12-09 06:04:01.974793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:05:28.986 [2024-12-09 06:04:01.974815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.986 [2024-12-09 06:04:01.974831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:05:28.986 [2024-12-09 06:04:01.974853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.986 [2024-12-09 06:04:01.974868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 
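Stepping back to the killprocess 89099 sequence traced just before this dump: the common helper validates the PID argument, confirms the process is still alive with kill -0, resolves its name with ps (reactor_2 here), special-cases the name sudo, then echoes, kills, and waits. A loose bash sketch of that flow (simplified, not the exact common/autotest_common.sh helper; the sudo special case is omitted):

# Sketch of the killprocess flow traced earlier in this log (simplified).
killprocess() {
    local pid=$1
    [[ -n "$pid" ]] || return 1              # mirrors the '[' -z 89099 ']' guard
    kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if the process is gone
    local name
    name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_2 in the trace above
    echo "killing process with pid $pid (comm=$name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # wait only succeeds if $pid is our child
}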
01:05:28.986 [2024-12-09 06:04:01.974890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.986 [2024-12-09 06:04:01.974906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:05:28.986 [2024-12-09 06:04:01.974942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.986 [2024-12-09 06:04:01.974959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:05:28.986 [2024-12-09 06:04:01.974982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.986 [2024-12-09 06:04:01.974998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:05:28.986 [2024-12-09 06:04:01.975020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.986 [2024-12-09 06:04:01.975035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:05:28.986 [2024-12-09 06:04:01.975058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.986 [2024-12-09 06:04:01.975073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:05:28.986 [2024-12-09 06:04:01.975096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.986 [2024-12-09 06:04:01.975111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:05:28.986 [2024-12-09 06:04:01.975133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.986 [2024-12-09 06:04:01.975149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:05:28.986 [2024-12-09 06:04:01.975180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.986 [2024-12-09 06:04:01.975196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:05:28.986 [2024-12-09 06:04:01.975218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.986 [2024-12-09 06:04:01.975233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:05:28.986 [2024-12-09 06:04:01.975256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:28.986 [2024-12-09 06:04:01.975271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:05:28.986 [2024-12-09 06:04:01.975293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.986 [2024-12-09 06:04:01.975309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.975331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.975347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.975369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.975384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.975407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.975431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.975454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.975470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.975492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.975508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.975530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.975546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.975568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.975583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.975606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.975621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.975655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.975675] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.975698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.975721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.975743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.975759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.975781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.975796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.975818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.975834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.975856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.975872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.975893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.975917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.975941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.975956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.975979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.975994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.976016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.976032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.976056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:05:28.987 [2024-12-09 06:04:01.976088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.976127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.976147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.976169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.976186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.976225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.976246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.976269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.976284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.976307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.976322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.976344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.976360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.976381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.976401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.976425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.976447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.976482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.976499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.976521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 
lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.976537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.976559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.976574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.976596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.976612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.976634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.976665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.976691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.976707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.976729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.976745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.976767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.976785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.976809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.976825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.976847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.976863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.976885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.987 [2024-12-09 06:04:01.976900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:05:28.987 [2024-12-09 06:04:01.976922] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.976937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.976968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.976984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.977007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.977032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.977054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.977070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.977092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.977108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.977130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.977145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.977167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.977183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.977205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.977221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.977243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.977259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.978167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.978198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
01:05:28.988 [2024-12-09 06:04:01.978227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.978245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.978268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.978284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.978306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.978322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.978349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:28.988 [2024-12-09 06:04:01.978381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.978406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:28.988 [2024-12-09 06:04:01.978422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.978445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:28.988 [2024-12-09 06:04:01.978461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.978483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:28.988 [2024-12-09 06:04:01.978499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.978521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.978536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.978558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.978574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.978602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.978625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.978725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.978752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.978776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:28.988 [2024-12-09 06:04:01.978792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.978815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.978831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.978852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.978868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.978890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.978906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.978929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.978961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.978985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.979001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.979024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.979040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.979062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.979078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.979103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.979126] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.979148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.979164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.979186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.979201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.979224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.979239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.979261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.979277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.979299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.979314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.979336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.979352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.979374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.979390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.979411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.979427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:05:28.988 [2024-12-09 06:04:01.979459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.988 [2024-12-09 06:04:01.979475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.979497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:05:28.989 [2024-12-09 06:04:01.979512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.979535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.979550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.979573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.979588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.979611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.979626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.979668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.979689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.979717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.979733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.979756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.979772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.979794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.979810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.979832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.979847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.979869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.979885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.979907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 
lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.979922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.979953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.979969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.979991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.980006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.980028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.980044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.980065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.980081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.980103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.980119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.980141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:28.989 [2024-12-09 06:04:01.980156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.980178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:28.989 [2024-12-09 06:04:01.980194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.980216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.980231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.980253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.980269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.980295] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.980311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.980333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.980349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.980371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.980387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.980409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.980432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.980455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.980471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.980493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.980508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.980530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.980545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.980568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.980584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.980605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.980621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.980654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.980673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 
01:05:28.989 [2024-12-09 06:04:01.980697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.980712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.980735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.980751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.981540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.981569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.981598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.981615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.981637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.981672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.981697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.981730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.981756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.981772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:05:28.989 [2024-12-09 06:04:01.981794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.989 [2024-12-09 06:04:01.981810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:05:28.990 [2024-12-09 06:04:01.981833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.990 [2024-12-09 06:04:01.981849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:05:28.990 [2024-12-09 06:04:01.981870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.990 [2024-12-09 06:04:01.981886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:85 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:05:28.990 [2024-12-09 06:04:01.981908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.990 [2024-12-09 06:04:01.981924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:05:28.990 [2024-12-09 06:04:01.981945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.990 [2024-12-09 06:04:01.981961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:05:28.990 [2024-12-09 06:04:01.981983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.990 [2024-12-09 06:04:01.981998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:05:28.990 [2024-12-09 06:04:01.982020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.990 [2024-12-09 06:04:01.982035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:05:28.990 [2024-12-09 06:04:01.982057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.990 [2024-12-09 06:04:01.982073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:05:28.990 [2024-12-09 06:04:01.982095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.990 [2024-12-09 06:04:01.982111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:05:28.990 [2024-12-09 06:04:01.982133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.990 [2024-12-09 06:04:01.982148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:05:28.990 [2024-12-09 06:04:01.982170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.990 [2024-12-09 06:04:01.982186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:05:28.990 [2024-12-09 06:04:01.982216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.990 [2024-12-09 06:04:01.982232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:05:28.990 [2024-12-09 06:04:01.982254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:28.990 [2024-12-09 06:04:01.982270] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:05:28.990 [2024-12-09 06:04:01.982291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.990 [2024-12-09 06:04:01.982307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:05:28.990 [2024-12-09 06:04:01.982329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.990 [2024-12-09 06:04:01.982344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:05:28.990 [2024-12-09 06:04:01.982367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.990 [2024-12-09 06:04:01.982382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:05:28.990 [2024-12-09 06:04:01.982404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.990 [2024-12-09 06:04:01.982420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:05:28.990 [2024-12-09 06:04:01.982442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.990 [2024-12-09 06:04:01.982458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:05:28.990 [2024-12-09 06:04:01.982480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.990 [2024-12-09 06:04:01.982496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:05:28.990 [2024-12-09 06:04:01.982518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.990 [2024-12-09 06:04:01.982533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:05:28.990 [2024-12-09 06:04:01.982556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.990 [2024-12-09 06:04:01.982571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:05:28.990 [2024-12-09 06:04:01.982593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.990 [2024-12-09 06:04:01.982608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:05:28.990 [2024-12-09 06:04:01.982661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:05:28.990 [2024-12-09 06:04:01.982684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 
01:05:28.990 - 01:05:28.996 [2024-12-09 06:04:01.982735 - 06:04:02.002111] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion pairs for WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) I/O on sqid:1 nsid:1, lba 96016 - 97032 len:8, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0 
01:05:28.996 [2024-12-09 06:04:02.002133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.996 [2024-12-09 06:04:02.002148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.002169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.996 [2024-12-09 06:04:02.002184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.002206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.996 [2024-12-09 06:04:02.002230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.002253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.996 [2024-12-09 06:04:02.002268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.002292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.996 [2024-12-09 06:04:02.002307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.002329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.996 [2024-12-09 06:04:02.002345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.002366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:28.996 [2024-12-09 06:04:02.002381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.002403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:28.996 [2024-12-09 06:04:02.002419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.002441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.996 [2024-12-09 06:04:02.002456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.002478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.996 [2024-12-09 06:04:02.002493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:66 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.002515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.996 [2024-12-09 06:04:02.002530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.002552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.996 [2024-12-09 06:04:02.002567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.002588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.996 [2024-12-09 06:04:02.002603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.002625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.996 [2024-12-09 06:04:02.002671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.002698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.996 [2024-12-09 06:04:02.002724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.002748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.996 [2024-12-09 06:04:02.002764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.002786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.996 [2024-12-09 06:04:02.002801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.002823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.996 [2024-12-09 06:04:02.002838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.002860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.996 [2024-12-09 06:04:02.002876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.002898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.996 [2024-12-09 06:04:02.002914] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.003733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.996 [2024-12-09 06:04:02.003765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.003793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.996 [2024-12-09 06:04:02.003812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.003834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.996 [2024-12-09 06:04:02.003850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.003872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.996 [2024-12-09 06:04:02.003887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.003909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.996 [2024-12-09 06:04:02.003924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.003946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.996 [2024-12-09 06:04:02.003961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.003983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.996 [2024-12-09 06:04:02.003999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.004035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.996 [2024-12-09 06:04:02.004051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.004073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.996 [2024-12-09 06:04:02.004089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.004111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:05:28.996 [2024-12-09 06:04:02.004126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.004148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.996 [2024-12-09 06:04:02.004163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.004185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.996 [2024-12-09 06:04:02.004200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.004222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.996 [2024-12-09 06:04:02.004237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:05:28.996 [2024-12-09 06:04:02.004258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.004273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.004295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.004310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.004332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.004347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.004370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.004385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.004407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.004422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.004444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.004459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.004490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:96040 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:28.997 [2024-12-09 06:04:02.004506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.004528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.004544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.004565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.004581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.004602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.004618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.004639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.004674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.004698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.004715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.004736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.004752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.004774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.004789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.004811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.004826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.004848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.004863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.004885] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.004900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.004922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.004937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.004959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.004984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.005007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.005023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.005045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.005060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.005082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.005097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.005120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.005135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.005156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.005172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.005194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.005210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.005232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.005247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 
06:04:02.005269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.005284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.005306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.005321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.005343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.005358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.005380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.005395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.005417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.005439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.005462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.005478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.005501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.005516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.005538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.005554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.005575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.005591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.005613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.005629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 
cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.005664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.005683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.005706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.005721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:05:28.997 [2024-12-09 06:04:02.005743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.997 [2024-12-09 06:04:02.005759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.005781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.005796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.005818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.005834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.005856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.005871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.005892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.005908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.005939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.005955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.005976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.005992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.006014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.006029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.006051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.006066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.006088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.006103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.006125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.006140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.006162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.006177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.006199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.006214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.006237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.006253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.007108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.007139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.007167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.007184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.007206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.007221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.007257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.007274] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.007296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.007311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.007334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.007350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.007372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.007387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.007409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:28.998 [2024-12-09 06:04:02.007425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.007447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:28.998 [2024-12-09 06:04:02.007462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.007484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:28.998 [2024-12-09 06:04:02.007499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.007521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:28.998 [2024-12-09 06:04:02.007536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.007558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.007573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.007595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.007610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.007631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 01:05:28.998 [2024-12-09 06:04:02.007873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.007919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.007937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.007960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:28.998 [2024-12-09 06:04:02.008000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.008027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.008044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.008066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.008081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.008103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.008118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.008141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.008156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.008178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.008193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.008215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.008230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.008252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.008268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.008290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:51 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.008305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.008327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.998 [2024-12-09 06:04:02.008342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:05:28.998 [2024-12-09 06:04:02.008370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.999 [2024-12-09 06:04:02.008385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:05:28.999 [2024-12-09 06:04:02.008407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.999 [2024-12-09 06:04:02.008422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:05:28.999 [2024-12-09 06:04:02.008444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.999 [2024-12-09 06:04:02.008473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:05:28.999 [2024-12-09 06:04:02.008499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.999 [2024-12-09 06:04:02.008514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:05:28.999 [2024-12-09 06:04:02.008537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.999 [2024-12-09 06:04:02.008552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:05:28.999 [2024-12-09 06:04:02.008574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.999 [2024-12-09 06:04:02.008589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:05:28.999 [2024-12-09 06:04:02.008611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.999 [2024-12-09 06:04:02.008627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:05:28.999 [2024-12-09 06:04:02.008666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.999 [2024-12-09 06:04:02.008687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:05:28.999 [2024-12-09 06:04:02.008710] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.999 [2024-12-09 06:04:02.008726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:05:28.999 [2024-12-09 06:04:02.008748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.999 [2024-12-09 06:04:02.008763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:05:28.999 [2024-12-09 06:04:02.008785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.999 [2024-12-09 06:04:02.008800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:05:28.999 [2024-12-09 06:04:02.008822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.999 [2024-12-09 06:04:02.008840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:05:28.999 [2024-12-09 06:04:02.008863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.999 [2024-12-09 06:04:02.008879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:05:28.999 [2024-12-09 06:04:02.008901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.999 [2024-12-09 06:04:02.008916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:05:28.999 [2024-12-09 06:04:02.008938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.999 [2024-12-09 06:04:02.008953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:05:28.999 [2024-12-09 06:04:02.008992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.999 [2024-12-09 06:04:02.009009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:05:28.999 [2024-12-09 06:04:02.009031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.999 [2024-12-09 06:04:02.009047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:05:28.999 [2024-12-09 06:04:02.009069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:28.999 [2024-12-09 06:04:02.009084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 
01:05:28.999 [2024-12-09 06:04:02.009106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:05:28.999 [2024-12-09 06:04:02.009121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
01:05:28.999 [2024-12-09 06:04:02.009328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:05:28.999 [2024-12-09 06:04:02.009343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0
[... nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs repeat from 01:05:28.999 through 01:05:29.005 for WRITE and READ commands on sqid:1 nsid:1 (lba 96016-97032, len:8); every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 ...]
01:05:29.005 [2024-12-09 06:04:02.026747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:05:29.005 [2024-12-09 06:04:02.026763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:05:29.005 [2024-12-09 06:04:02.026790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.005 [2024-12-09 06:04:02.026806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:05:29.005 [2024-12-09 06:04:02.026832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.005 [2024-12-09 06:04:02.026848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:05:29.005 [2024-12-09 06:04:02.026874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.005 [2024-12-09 06:04:02.026890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:05:29.005 [2024-12-09 06:04:02.026923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.005 [2024-12-09 06:04:02.026939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:05:29.005 [2024-12-09 06:04:02.027117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.005 [2024-12-09 06:04:02.027140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:05:29.005 8416.12 IOPS, 32.88 MiB/s [2024-12-09T06:04:23.591Z] 7948.56 IOPS, 31.05 MiB/s [2024-12-09T06:04:23.591Z] 7530.21 IOPS, 29.41 MiB/s [2024-12-09T06:04:23.591Z] 7153.70 IOPS, 27.94 MiB/s [2024-12-09T06:04:23.591Z] 7187.95 IOPS, 28.08 MiB/s [2024-12-09T06:04:23.591Z] 7269.95 IOPS, 28.40 MiB/s [2024-12-09T06:04:23.591Z] 7337.30 IOPS, 28.66 MiB/s [2024-12-09T06:04:23.591Z] 7439.29 IOPS, 29.06 MiB/s [2024-12-09T06:04:23.591Z] 7602.32 IOPS, 29.70 MiB/s [2024-12-09T06:04:23.591Z] 7745.35 IOPS, 30.26 MiB/s [2024-12-09T06:04:23.591Z] 7869.22 IOPS, 30.74 MiB/s [2024-12-09T06:04:23.591Z] 7913.21 IOPS, 30.91 MiB/s [2024-12-09T06:04:23.591Z] 7946.07 IOPS, 31.04 MiB/s [2024-12-09T06:04:23.591Z] 7973.77 IOPS, 31.15 MiB/s [2024-12-09T06:04:23.591Z] 8003.19 IOPS, 31.26 MiB/s [2024-12-09T06:04:23.591Z] 8094.84 IOPS, 31.62 MiB/s [2024-12-09T06:04:23.591Z] 8201.82 IOPS, 32.04 MiB/s [2024-12-09T06:04:23.591Z] 8294.76 IOPS, 32.40 MiB/s [2024-12-09T06:04:23.591Z] [2024-12-09 06:04:20.345278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.005 [2024-12-09 06:04:20.345345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:05:29.005 [2024-12-09 06:04:20.345406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.005 [2024-12-09 06:04:20.345428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0029 p:0 m:0 
dnr:0 01:05:29.005 [2024-12-09 06:04:20.345452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.005 [2024-12-09 06:04:20.345469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:05:29.005 [2024-12-09 06:04:20.345491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.005 [2024-12-09 06:04:20.345537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:05:29.005 [2024-12-09 06:04:20.345561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.005 [2024-12-09 06:04:20.345576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:05:29.005 [2024-12-09 06:04:20.345598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.005 [2024-12-09 06:04:20.345613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:05:29.005 [2024-12-09 06:04:20.345635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.005 [2024-12-09 06:04:20.345665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:05:29.005 [2024-12-09 06:04:20.345690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.005 [2024-12-09 06:04:20.345706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:05:29.005 [2024-12-09 06:04:20.345728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.005 [2024-12-09 06:04:20.345743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:05:29.005 [2024-12-09 06:04:20.345764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.005 [2024-12-09 06:04:20.345779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:05:29.005 [2024-12-09 06:04:20.345801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.005 [2024-12-09 06:04:20.345816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:05:29.005 [2024-12-09 06:04:20.345838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:64472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.005 [2024-12-09 06:04:20.345852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:05:29.005 [2024-12-09 06:04:20.345873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.005 [2024-12-09 06:04:20.345888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:05:29.005 [2024-12-09 06:04:20.345910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.005 [2024-12-09 06:04:20.345924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:05:29.005 [2024-12-09 06:04:20.345946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.005 [2024-12-09 06:04:20.345972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:05:29.005 [2024-12-09 06:04:20.345993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.005 [2024-12-09 06:04:20.346009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:05:29.005 [2024-12-09 06:04:20.346041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.005 [2024-12-09 06:04:20.346058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:05:29.005 [2024-12-09 06:04:20.346080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.005 [2024-12-09 06:04:20.346095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:05:29.005 [2024-12-09 06:04:20.346117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.005 [2024-12-09 06:04:20.346132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:05:29.005 [2024-12-09 06:04:20.346154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.005 [2024-12-09 06:04:20.346169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:05:29.005 [2024-12-09 06:04:20.346191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:64464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.005 [2024-12-09 06:04:20.346206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:05:29.005 [2024-12-09 06:04:20.346228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.005 [2024-12-09 06:04:20.346243] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:05:29.005 [2024-12-09 06:04:20.346264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.005 [2024-12-09 06:04:20.346279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:05:29.005 [2024-12-09 06:04:20.346301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.005 [2024-12-09 06:04:20.346316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:05:29.005 [2024-12-09 06:04:20.346338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.005 [2024-12-09 06:04:20.346353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:05:29.005 [2024-12-09 06:04:20.346374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.005 [2024-12-09 06:04:20.346389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:05:29.005 [2024-12-09 06:04:20.346411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.005 [2024-12-09 06:04:20.346427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.347412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.006 [2024-12-09 06:04:20.347443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.347485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.006 [2024-12-09 06:04:20.347504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.347527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.006 [2024-12-09 06:04:20.347542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.347564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.006 [2024-12-09 06:04:20.347580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.347602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64672 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 01:05:29.006 [2024-12-09 06:04:20.347618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.347641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:64704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.006 [2024-12-09 06:04:20.347676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.347700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.006 [2024-12-09 06:04:20.347716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.347739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.006 [2024-12-09 06:04:20.347754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.347776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.006 [2024-12-09 06:04:20.347791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.347813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.006 [2024-12-09 06:04:20.347829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.347851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.006 [2024-12-09 06:04:20.347866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.347888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.006 [2024-12-09 06:04:20.347903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.347925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:64744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.006 [2024-12-09 06:04:20.347940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.347972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.006 [2024-12-09 06:04:20.347988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.348010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:113 nsid:1 lba:64808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.006 [2024-12-09 06:04:20.348025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.348047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.006 [2024-12-09 06:04:20.348062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.348084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.006 [2024-12-09 06:04:20.348099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.348121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.006 [2024-12-09 06:04:20.348136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.348157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.006 [2024-12-09 06:04:20.348172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.348194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.006 [2024-12-09 06:04:20.348216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.348238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.006 [2024-12-09 06:04:20.348253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.348275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.006 [2024-12-09 06:04:20.348290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.348313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.006 [2024-12-09 06:04:20.348329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.348351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.006 [2024-12-09 06:04:20.348366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.348387] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:64768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.006 [2024-12-09 06:04:20.348403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.348426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.006 [2024-12-09 06:04:20.348449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.349052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.006 [2024-12-09 06:04:20.349081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.349110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.006 [2024-12-09 06:04:20.349127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.349149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.006 [2024-12-09 06:04:20.349164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.349186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.006 [2024-12-09 06:04:20.349201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.349222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.006 [2024-12-09 06:04:20.349238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.349260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.006 [2024-12-09 06:04:20.349275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.349307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.006 [2024-12-09 06:04:20.349322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.349344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.006 [2024-12-09 06:04:20.349360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 
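The long run of notices above is the expected signature of this phase of the multipath test: while one path is held in an inaccessible ANA state, each READ/WRITE submitted on qid:1 completes with status (03/02), i.e. status code type 0x3 (path-related) / status code 0x2 (Asymmetric Access Inaccessible), and the host falls back to the remaining path, which is why the IOPS samples further down keep advancing. A quick way to gauge how many commands hit this state in a saved copy of this console output (the file name below is only a placeholder):

    grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' nvmf-tcp-vg-autotest.log | wc -l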
01:05:29.006 [2024-12-09 06:04:20.349381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.006 [2024-12-09 06:04:20.349396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.349418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.006 [2024-12-09 06:04:20.349433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.349455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.006 [2024-12-09 06:04:20.349469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.349491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:65032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.006 [2024-12-09 06:04:20.349519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:05:29.006 [2024-12-09 06:04:20.349543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.007 [2024-12-09 06:04:20.349559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:05:29.007 [2024-12-09 06:04:20.349581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:65096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.007 [2024-12-09 06:04:20.349596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:05:29.007 [2024-12-09 06:04:20.349617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:64824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:05:29.007 [2024-12-09 06:04:20.349632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:05:29.007 [2024-12-09 06:04:20.349671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.007 [2024-12-09 06:04:20.349690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:05:29.007 [2024-12-09 06:04:20.349718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.007 [2024-12-09 06:04:20.349735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:05:29.007 [2024-12-09 06:04:20.349756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.007 [2024-12-09 06:04:20.349771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:05:29.007 [2024-12-09 06:04:20.349793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.007 [2024-12-09 06:04:20.349808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:05:29.007 [2024-12-09 06:04:20.349830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:05:29.007 [2024-12-09 06:04:20.349845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:05:29.007 8340.06 IOPS, 32.58 MiB/s [2024-12-09T06:04:23.593Z] 8359.53 IOPS, 32.65 MiB/s [2024-12-09T06:04:23.593Z] 8375.76 IOPS, 32.72 MiB/s [2024-12-09T06:04:23.593Z] Received shutdown signal, test time was about 37.452611 seconds 01:05:29.007 01:05:29.007 Latency(us) 01:05:29.007 [2024-12-09T06:04:23.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:05:29.007 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:05:29.007 Verification LBA range: start 0x0 length 0x4000 01:05:29.007 Nvme0n1 : 37.45 8380.11 32.73 0.00 0.00 15243.84 181.53 4087539.90 01:05:29.007 [2024-12-09T06:04:23.593Z] =================================================================================================================== 01:05:29.007 [2024-12-09T06:04:23.593Z] Total : 8380.11 32.73 0.00 0.00 15243.84 181.53 4087539.90 01:05:29.007 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:05:29.266 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 01:05:29.266 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:05:29.266 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 01:05:29.266 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 01:05:29.266 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 01:05:29.525 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:05:29.525 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 01:05:29.525 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 01:05:29.525 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:05:29.525 rmmod nvme_tcp 01:05:29.525 rmmod nvme_fabrics 01:05:29.525 rmmod nvme_keyring 01:05:29.525 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:05:29.525 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 01:05:29.525 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 01:05:29.525 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 89009 ']' 01:05:29.525 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # 
killprocess 89009 01:05:29.525 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 89009 ']' 01:05:29.525 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 89009 01:05:29.525 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 01:05:29.525 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:05:29.525 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89009 01:05:29.525 killing process with pid 89009 01:05:29.525 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:05:29.525 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:05:29.525 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89009' 01:05:29.525 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 89009 01:05:29.525 06:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 89009 01:05:29.525 06:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:05:29.525 06:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:05:29.525 06:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:05:29.525 06:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 01:05:29.525 06:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 01:05:29.525 06:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:05:29.525 06:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 01:05:29.783 06:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:05:29.783 06:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:05:29.783 06:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:05:29.783 06:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:05:29.783 06:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:05:29.783 06:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:05:29.783 06:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:05:29.783 06:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:05:29.783 06:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:05:29.783 06:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:05:29.783 06:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:05:29.783 06:04:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:05:29.783 06:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:05:29.783 06:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:05:29.783 06:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:05:29.784 06:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 01:05:29.784 06:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:05:29.784 06:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:05:29.784 06:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:05:29.784 06:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 01:05:29.784 01:05:29.784 real 0m42.793s 01:05:29.784 user 2m22.070s 01:05:29.784 sys 0m9.998s 01:05:29.784 06:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 01:05:29.784 06:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 01:05:29.784 ************************************ 01:05:29.784 END TEST nvmf_host_multipath_status 01:05:29.784 ************************************ 01:05:30.043 06:04:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 01:05:30.043 06:04:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:05:30.043 06:04:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:05:30.043 06:04:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:05:30.043 ************************************ 01:05:30.043 START TEST nvmf_discovery_remove_ifc 01:05:30.043 ************************************ 01:05:30.043 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 01:05:30.043 * Looking for test storage... 
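Before this next test began, nvmftestfini in the trace above unwound everything the multipath run had set up: the subsystem, the host-side kernel modules, the SPDK firewall rules, and the veth/bridge/namespace topology. A condensed sketch of those steps as plain shell, for reference only (command names, interface names, and the pid are taken from the trace; the real logic lives in the sourced nvmf/common.sh and autotest_common.sh helpers and may differ in detail):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # drop the test subsystem first
    modprobe -v -r nvme-tcp                                 # unload host transport modules (the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above)
    modprobe -v -r nvme-fabrics
    killprocess 89009                                       # stop the nvmf_tgt application; killprocess is the autotest_common.sh helper seen in the trace
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # keep every firewall rule except the SPDK test ones
    ip link delete nvmf_br type bridge                      # tear down the test bridge and the initiator veth ends
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if   # target-side veth ends live inside the namespace
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk                        # assumed: remove_spdk_ns in the trace removes the namespace itself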
01:05:30.043 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:05:30.043 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:05:30.043 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 01:05:30.043 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:05:30.043 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:05:30.043 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:05:30.043 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 01:05:30.043 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 01:05:30.043 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 01:05:30.043 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 01:05:30.043 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 01:05:30.043 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 01:05:30.043 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 01:05:30.043 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 01:05:30.043 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 01:05:30.043 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:05:30.043 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 01:05:30.043 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 01:05:30.043 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 01:05:30.043 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:05:30.043 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 01:05:30.043 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:05:30.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:05:30.044 --rc genhtml_branch_coverage=1 01:05:30.044 --rc genhtml_function_coverage=1 01:05:30.044 --rc genhtml_legend=1 01:05:30.044 --rc geninfo_all_blocks=1 01:05:30.044 --rc geninfo_unexecuted_blocks=1 01:05:30.044 01:05:30.044 ' 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:05:30.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:05:30.044 --rc genhtml_branch_coverage=1 01:05:30.044 --rc genhtml_function_coverage=1 01:05:30.044 --rc genhtml_legend=1 01:05:30.044 --rc geninfo_all_blocks=1 01:05:30.044 --rc geninfo_unexecuted_blocks=1 01:05:30.044 01:05:30.044 ' 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:05:30.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:05:30.044 --rc genhtml_branch_coverage=1 01:05:30.044 --rc genhtml_function_coverage=1 01:05:30.044 --rc genhtml_legend=1 01:05:30.044 --rc geninfo_all_blocks=1 01:05:30.044 --rc geninfo_unexecuted_blocks=1 01:05:30.044 01:05:30.044 ' 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:05:30.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:05:30.044 --rc genhtml_branch_coverage=1 01:05:30.044 --rc genhtml_function_coverage=1 01:05:30.044 --rc genhtml_legend=1 01:05:30.044 --rc geninfo_all_blocks=1 01:05:30.044 --rc geninfo_unexecuted_blocks=1 01:05:30.044 01:05:30.044 ' 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:05:30.044 06:04:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:05:30.044 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:05:30.045 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:05:30.045 06:04:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:05:30.045 Cannot find device "nvmf_init_br" 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 01:05:30.045 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:05:30.304 Cannot find device "nvmf_init_br2" 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:05:30.304 Cannot find device "nvmf_tgt_br" 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:05:30.304 Cannot find device "nvmf_tgt_br2" 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:05:30.304 Cannot find device "nvmf_init_br" 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:05:30.304 Cannot find device "nvmf_init_br2" 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:05:30.304 Cannot find device "nvmf_tgt_br" 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:05:30.304 Cannot find device "nvmf_tgt_br2" 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:05:30.304 Cannot find device "nvmf_br" 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:05:30.304 Cannot find device "nvmf_init_if" 01:05:30.304 06:04:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:05:30.304 Cannot find device "nvmf_init_if2" 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:05:30.304 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:05:30.304 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:05:30.304 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:05:30.305 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:05:30.305 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:05:30.305 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:05:30.305 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:05:30.305 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:05:30.305 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:05:30.305 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:05:30.305 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:05:30.305 06:04:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:05:30.305 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:05:30.564 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:05:30.564 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:05:30.564 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:05:30.564 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:05:30.564 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:05:30.564 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:05:30.564 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:05:30.564 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:05:30.564 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:05:30.564 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:05:30.564 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:05:30.564 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:05:30.564 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:05:30.564 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:05:30.564 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 01:05:30.564 01:05:30.564 --- 10.0.0.3 ping statistics --- 01:05:30.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:05:30.564 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 01:05:30.564 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:05:30.564 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:05:30.564 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 01:05:30.564 01:05:30.564 --- 10.0.0.4 ping statistics --- 01:05:30.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:05:30.564 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 01:05:30.564 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:05:30.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:05:30.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 01:05:30.564 01:05:30.564 --- 10.0.0.1 ping statistics --- 01:05:30.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:05:30.564 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 01:05:30.564 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:05:30.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:05:30.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 01:05:30.564 01:05:30.564 --- 10.0.0.2 ping statistics --- 01:05:30.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:05:30.564 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 01:05:30.564 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:05:30.564 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 01:05:30.564 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:05:30.564 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:05:30.564 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:05:30.564 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:05:30.564 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:05:30.564 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:05:30.564 06:04:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:05:30.564 06:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 01:05:30.564 06:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:05:30.564 06:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 01:05:30.564 06:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:30.564 06:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=90482 01:05:30.564 06:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 90482 01:05:30.564 06:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 01:05:30.564 06:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 90482 ']' 01:05:30.564 06:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:05:30.564 06:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 01:05:30.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:05:30.564 06:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
01:05:30.564 06:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 01:05:30.564 06:04:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:30.564 [2024-12-09 06:04:25.061315] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:05:30.564 [2024-12-09 06:04:25.061633] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:05:30.822 [2024-12-09 06:04:25.205816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:30.822 [2024-12-09 06:04:25.252621] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:05:30.822 [2024-12-09 06:04:25.252709] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:05:30.822 [2024-12-09 06:04:25.252728] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:05:30.822 [2024-12-09 06:04:25.252742] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:05:30.822 [2024-12-09 06:04:25.252754] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:05:30.822 [2024-12-09 06:04:25.253159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:05:31.754 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:05:31.754 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 01:05:31.754 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:05:31.754 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 01:05:31.754 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:31.754 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:05:31.754 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 01:05:31.754 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:31.754 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:31.755 [2024-12-09 06:04:26.144782] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:05:31.755 [2024-12-09 06:04:26.152913] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 01:05:31.755 null0 01:05:31.755 [2024-12-09 06:04:26.184881] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:05:31.755 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:31.755 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=90532 01:05:31.755 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 01:05:31.755 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@60 -- # waitforlisten 90532 /tmp/host.sock 01:05:31.755 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 90532 ']' 01:05:31.755 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 01:05:31.755 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 01:05:31.755 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 01:05:31.755 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 01:05:31.755 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 01:05:31.755 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:31.755 [2024-12-09 06:04:26.268351] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:05:31.755 [2024-12-09 06:04:26.268459] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90532 ] 01:05:32.013 [2024-12-09 06:04:26.414106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:05:32.013 [2024-12-09 06:04:26.447292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:05:32.013 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:05:32.013 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 01:05:32.013 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:05:32.013 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 01:05:32.013 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:32.013 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:32.013 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:32.013 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 01:05:32.013 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:32.013 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:32.013 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:32.013 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 01:05:32.013 06:04:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:32.013 06:04:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:33.387 [2024-12-09 06:04:27.568055] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 01:05:33.387 [2024-12-09 06:04:27.568104] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 01:05:33.387 [2024-12-09 06:04:27.568126] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:05:33.387 [2024-12-09 06:04:27.654206] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 01:05:33.387 [2024-12-09 06:04:27.708758] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 01:05:33.387 [2024-12-09 06:04:27.709671] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1cc1110:1 started. 01:05:33.387 [2024-12-09 06:04:27.711375] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 01:05:33.387 [2024-12-09 06:04:27.711441] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 01:05:33.387 [2024-12-09 06:04:27.711473] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 01:05:33.387 [2024-12-09 06:04:27.711493] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 01:05:33.387 [2024-12-09 06:04:27.711520] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 01:05:33.388 06:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:33.388 06:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 01:05:33.388 06:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:05:33.388 [2024-12-09 06:04:27.716541] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1cc1110 was disconnected and freed. delete nvme_qpair. 
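The trace above is the host attaching the bdev_nvme discovery service to the target's discovery port and then polling its own bdev list until the attached namespace appears. A minimal stand-alone sketch of those two steps, assuming the repo path and the /tmp/host.sock RPC socket shown in the trace (rpc_cmd in the test scripts is a thin wrapper around scripts/rpc.py, and jq/sort/xargs are used exactly as in the get_bdev_list helper traced below):

    # Start discovery against the target's discovery service (same flags the test passes above).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach

    # Roughly the loop wait_for_bdev runs: poll until the discovered namespace shows up as nvme0n1.
    until [[ "$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
                | jq -r '.[].name' | sort | xargs)" == "nvme0n1" ]]; do
        sleep 1
    done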
01:05:33.388 06:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:33.388 06:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:33.388 06:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:05:33.388 06:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:33.388 06:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:05:33.388 06:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:05:33.388 06:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:33.388 06:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 01:05:33.388 06:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 01:05:33.388 06:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 01:05:33.388 06:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 01:05:33.388 06:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:05:33.388 06:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:05:33.388 06:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:33.388 06:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:05:33.388 06:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:33.388 06:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:05:33.388 06:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:33.388 06:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:33.388 06:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:05:33.388 06:04:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:05:34.322 06:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:05:34.322 06:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:34.322 06:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:34.322 06:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:34.322 06:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:05:34.322 06:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:05:34.322 06:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:05:34.322 06:04:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:34.322 06:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:05:34.322 06:04:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:05:35.692 06:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:05:35.692 06:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:05:35.692 06:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:35.692 06:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:35.692 06:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:35.692 06:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:05:35.692 06:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:05:35.692 06:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:35.692 06:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:05:35.692 06:04:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:05:36.631 06:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:05:36.631 06:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:05:36.631 06:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:36.631 06:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:05:36.631 06:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:36.631 06:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:36.631 06:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:05:36.631 06:04:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:36.631 06:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:05:36.631 06:04:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:05:37.563 06:04:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:05:37.563 06:04:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:05:37.563 06:04:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:37.563 06:04:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:37.563 06:04:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:37.563 06:04:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:05:37.563 06:04:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:05:37.563 06:04:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:37.563 06:04:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:05:37.563 06:04:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:05:38.951 06:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:05:38.951 06:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:38.951 06:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:38.951 06:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:38.951 06:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:05:38.951 06:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:05:38.951 06:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:05:38.951 06:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:38.951 [2024-12-09 06:04:33.139232] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 01:05:38.951 [2024-12-09 06:04:33.139314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:05:38.951 [2024-12-09 06:04:33.139330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:38.951 [2024-12-09 06:04:33.139344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:05:38.951 [2024-12-09 06:04:33.139353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:38.951 [2024-12-09 06:04:33.139363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:05:38.951 [2024-12-09 06:04:33.139371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:38.951 [2024-12-09 06:04:33.139381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:05:38.951 [2024-12-09 06:04:33.139390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:38.951 [2024-12-09 06:04:33.139399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 01:05:38.951 [2024-12-09 06:04:33.139425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:38.951 [2024-12-09 06:04:33.139451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c03290 is same with the state(6) to be set 01:05:38.951 [2024-12-09 06:04:33.149228] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c03290 (9): Bad file descriptor 01:05:38.951 06:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:05:38.951 06:04:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:05:38.951 [2024-12-09 06:04:33.159260] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:05:38.951 [2024-12-09 06:04:33.159299] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:05:38.951 [2024-12-09 06:04:33.159307] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:05:38.951 [2024-12-09 06:04:33.159312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:05:38.951 [2024-12-09 06:04:33.159363] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:05:39.887 06:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:05:39.887 06:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:05:39.887 06:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:39.887 06:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:39.887 06:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:39.887 06:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:05:39.887 06:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:05:39.887 [2024-12-09 06:04:34.212783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 01:05:39.887 [2024-12-09 06:04:34.212923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c03290 with addr=10.0.0.3, port=4420 01:05:39.887 [2024-12-09 06:04:34.212961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c03290 is same with the state(6) to be set 01:05:39.887 [2024-12-09 06:04:34.213031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c03290 (9): Bad file descriptor 01:05:39.887 [2024-12-09 06:04:34.213983] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 01:05:39.887 [2024-12-09 06:04:34.214074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:05:39.887 [2024-12-09 06:04:34.214099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:05:39.887 [2024-12-09 06:04:34.214123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:05:39.887 [2024-12-09 06:04:34.214143] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:05:39.887 [2024-12-09 06:04:34.214157] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
01:05:39.887 [2024-12-09 06:04:34.214168] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:05:39.887 [2024-12-09 06:04:34.214188] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:05:39.887 [2024-12-09 06:04:34.214201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:05:39.887 06:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:39.887 06:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 01:05:39.887 06:04:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:05:40.823 [2024-12-09 06:04:35.214276] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:05:40.823 [2024-12-09 06:04:35.214321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:05:40.823 [2024-12-09 06:04:35.214353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:05:40.823 [2024-12-09 06:04:35.214364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:05:40.823 [2024-12-09 06:04:35.214374] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 01:05:40.823 [2024-12-09 06:04:35.214384] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:05:40.823 [2024-12-09 06:04:35.214391] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:05:40.823 [2024-12-09 06:04:35.214396] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
01:05:40.823 [2024-12-09 06:04:35.214431] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 01:05:40.823 [2024-12-09 06:04:35.214487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:05:40.824 [2024-12-09 06:04:35.214503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:40.824 [2024-12-09 06:04:35.214518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:05:40.824 [2024-12-09 06:04:35.214528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:40.824 [2024-12-09 06:04:35.214539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:05:40.824 [2024-12-09 06:04:35.214549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:40.824 [2024-12-09 06:04:35.214559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:05:40.824 [2024-12-09 06:04:35.214568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:40.824 [2024-12-09 06:04:35.214578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 01:05:40.824 [2024-12-09 06:04:35.214587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:05:40.824 [2024-12-09 06:04:35.214597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
01:05:40.824 [2024-12-09 06:04:35.215002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2f800 (9): Bad file descriptor 01:05:40.824 [2024-12-09 06:04:35.216014] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 01:05:40.824 [2024-12-09 06:04:35.216040] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 01:05:40.824 06:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:05:40.824 06:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:40.824 06:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:40.824 06:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:40.824 06:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:05:40.824 06:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:05:40.824 06:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:05:40.824 06:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:40.824 06:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 01:05:40.824 06:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:05:40.824 06:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:05:40.824 06:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 01:05:40.824 06:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:05:40.824 06:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:40.824 06:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:40.824 06:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:40.824 06:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:05:40.824 06:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:05:40.824 06:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:05:40.824 06:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:40.824 06:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 01:05:40.824 06:04:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:05:41.858 06:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:05:41.858 06:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:41.858 06:04:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:05:41.858 06:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:05:41.858 06:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:05:41.858 06:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:41.858 06:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:41.858 06:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:41.858 06:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 01:05:41.858 06:04:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 01:05:42.792 [2024-12-09 06:04:37.220588] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 01:05:42.792 [2024-12-09 06:04:37.220631] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 01:05:42.792 [2024-12-09 06:04:37.220664] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:05:42.792 [2024-12-09 06:04:37.308752] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 01:05:42.792 [2024-12-09 06:04:37.370247] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 01:05:42.792 [2024-12-09 06:04:37.370991] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1c982d0:1 started. 01:05:42.792 [2024-12-09 06:04:37.372193] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 01:05:42.792 [2024-12-09 06:04:37.372244] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 01:05:42.792 [2024-12-09 06:04:37.372268] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 01:05:42.792 [2024-12-09 06:04:37.372286] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 01:05:42.792 [2024-12-09 06:04:37.372296] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 01:05:43.049 [2024-12-09 06:04:37.379253] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1c982d0 was disconnected and freed. delete nvme_qpair. 
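Once the address is restored and the interface brought back up (host/discovery_remove_ifc.sh@82-83 above), the still-running discovery poller reconnects on its own and re-attaches the subsystem as nvme1, which is what the nvme1n1 bdev in the following get_bdev_list output confirms. A sketch of this restore-and-verify half, under the same path and socket assumptions as the earlier sketch:

    # Restore the target address and bring the interface back up inside the target netns.
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    # No new RPC is needed; wait until the re-attached namespace is exposed as bdev nvme1n1.
    until [[ "$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
                | jq -r '.[].name' | sort | xargs)" == "nvme1n1" ]]; do
        sleep 1
    done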
01:05:43.049 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 01:05:43.049 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 01:05:43.049 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:05:43.049 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 01:05:43.049 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 01:05:43.049 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:43.049 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:43.049 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:43.049 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 01:05:43.049 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 01:05:43.049 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 90532 01:05:43.049 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 90532 ']' 01:05:43.049 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 90532 01:05:43.049 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 01:05:43.049 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:05:43.049 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90532 01:05:43.049 killing process with pid 90532 01:05:43.049 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:05:43.049 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:05:43.049 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90532' 01:05:43.049 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 90532 01:05:43.049 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 90532 01:05:43.307 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 01:05:43.307 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 01:05:43.307 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 01:05:43.307 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:05:43.307 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 01:05:43.307 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 01:05:43.307 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:05:43.307 rmmod nvme_tcp 01:05:43.307 rmmod nvme_fabrics 01:05:43.307 rmmod nvme_keyring 01:05:43.307 06:04:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:05:43.307 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 01:05:43.307 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 01:05:43.307 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 90482 ']' 01:05:43.307 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 90482 01:05:43.307 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 90482 ']' 01:05:43.307 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 90482 01:05:43.307 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 01:05:43.307 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:05:43.307 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90482 01:05:43.307 killing process with pid 90482 01:05:43.307 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:05:43.307 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:05:43.307 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90482' 01:05:43.307 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 90482 01:05:43.307 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 90482 01:05:43.565 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:05:43.565 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:05:43.565 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:05:43.565 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 01:05:43.565 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 01:05:43.565 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:05:43.565 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 01:05:43.565 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:05:43.565 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:05:43.565 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:05:43.565 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:05:43.565 06:04:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:05:43.565 06:04:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:05:43.565 06:04:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:05:43.565 06:04:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:05:43.565 06:04:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:05:43.565 06:04:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:05:43.565 06:04:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:05:43.565 06:04:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:05:43.565 06:04:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:05:43.565 06:04:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:05:43.565 06:04:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:05:43.823 06:04:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 01:05:43.823 06:04:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:05:43.823 06:04:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:05:43.823 06:04:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:05:43.823 06:04:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 01:05:43.823 01:05:43.823 real 0m13.798s 01:05:43.823 user 0m24.036s 01:05:43.823 sys 0m1.617s 01:05:43.823 06:04:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 01:05:43.823 06:04:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 01:05:43.823 ************************************ 01:05:43.823 END TEST nvmf_discovery_remove_ifc 01:05:43.823 ************************************ 01:05:43.823 06:04:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 01:05:43.823 06:04:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:05:43.823 06:04:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:05:43.823 06:04:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:05:43.823 ************************************ 01:05:43.823 START TEST nvmf_identify_kernel_target 01:05:43.823 ************************************ 01:05:43.823 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 01:05:43.823 * Looking for test storage... 
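The trace above is the standard per-test teardown: killprocess stops the host-side SPDK app (pid 90532, the one answering on /tmp/host.sock) and nvmftestfini then stops the nvmf target app (pid 90482) before unloading the nvme-tcp modules and tearing the veth topology back down. A minimal bash sketch of that kill/wait pattern, simplified from the autotest_common.sh helper and not the exact SPDK code:

killprocess_sketch() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" 2>/dev/null || return 0           # already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")          # refuse to kill a privileged wrapper
    [ "$name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                  # reap it so the next test starts clean
}

The real helper also branches on uname (visible in the trace) so non-Linux runs can inspect the process by a different route.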
01:05:43.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:05:43.823 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:05:43.823 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 01:05:43.823 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:05:44.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:05:44.083 --rc genhtml_branch_coverage=1 01:05:44.083 --rc genhtml_function_coverage=1 01:05:44.083 --rc genhtml_legend=1 01:05:44.083 --rc geninfo_all_blocks=1 01:05:44.083 --rc geninfo_unexecuted_blocks=1 01:05:44.083 01:05:44.083 ' 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:05:44.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:05:44.083 --rc genhtml_branch_coverage=1 01:05:44.083 --rc genhtml_function_coverage=1 01:05:44.083 --rc genhtml_legend=1 01:05:44.083 --rc geninfo_all_blocks=1 01:05:44.083 --rc geninfo_unexecuted_blocks=1 01:05:44.083 01:05:44.083 ' 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:05:44.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:05:44.083 --rc genhtml_branch_coverage=1 01:05:44.083 --rc genhtml_function_coverage=1 01:05:44.083 --rc genhtml_legend=1 01:05:44.083 --rc geninfo_all_blocks=1 01:05:44.083 --rc geninfo_unexecuted_blocks=1 01:05:44.083 01:05:44.083 ' 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:05:44.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:05:44.083 --rc genhtml_branch_coverage=1 01:05:44.083 --rc genhtml_function_coverage=1 01:05:44.083 --rc genhtml_legend=1 01:05:44.083 --rc geninfo_all_blocks=1 01:05:44.083 --rc geninfo_unexecuted_blocks=1 01:05:44.083 01:05:44.083 ' 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
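The lcov probe above runs the generic version comparator from scripts/common.sh: lt 1.15 2 expands into cmp_versions, which splits each version string on IFS=.-: and compares it field by field. A compact illustration of the same check, given here as a hypothetical simplification that uses sort -V instead of the per-field loop traced in the log:

version_lt() {
    # succeeds when $1 sorts strictly before $2, e.g. version_lt 1.15 2
    [ "$1" = "$2" ] && return 1
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

In this run the comparison decides which lcov option syntax (the --rc lcov_branch_coverage=1 form) gets exported in LCOV_OPTS for the coverage pass.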
01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:05:44.083 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:05:44.084 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 01:05:44.084 06:04:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:05:44.084 06:04:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:05:44.084 Cannot find device "nvmf_init_br" 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:05:44.084 Cannot find device "nvmf_init_br2" 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:05:44.084 Cannot find device "nvmf_tgt_br" 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:05:44.084 Cannot find device "nvmf_tgt_br2" 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:05:44.084 Cannot find device "nvmf_init_br" 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:05:44.084 Cannot find device "nvmf_init_br2" 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:05:44.084 Cannot find device "nvmf_tgt_br" 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:05:44.084 Cannot find device "nvmf_tgt_br2" 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 01:05:44.084 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:05:44.084 Cannot find device "nvmf_br" 01:05:44.085 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 01:05:44.085 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:05:44.085 Cannot find device "nvmf_init_if" 01:05:44.085 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 01:05:44.085 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:05:44.085 Cannot find device "nvmf_init_if2" 01:05:44.085 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 01:05:44.085 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:05:44.085 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:05:44.085 06:04:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 01:05:44.085 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:05:44.085 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:05:44.085 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 01:05:44.085 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:05:44.085 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:05:44.085 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:05:44.085 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:05:44.085 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:05:44.085 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:05:44.085 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:05:44.085 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:05:44.085 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:05:44.085 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:05:44.085 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:05:44.085 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:05:44.344 06:04:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:05:44.344 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:05:44.344 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 01:05:44.344 01:05:44.344 --- 10.0.0.3 ping statistics --- 01:05:44.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:05:44.344 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 01:05:44.344 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:05:44.344 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 01:05:44.344 01:05:44.344 --- 10.0.0.4 ping statistics --- 01:05:44.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:05:44.344 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:05:44.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:05:44.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 01:05:44.344 01:05:44.344 --- 10.0.0.1 ping statistics --- 01:05:44.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:05:44.344 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:05:44.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:05:44.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 01:05:44.344 01:05:44.344 --- 10.0.0.2 ping statistics --- 01:05:44.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:05:44.344 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 01:05:44.344 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 01:05:44.345 06:04:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:05:44.603 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:05:44.861 Waiting for block devices as requested 01:05:44.861 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:05:44.861 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 01:05:45.119 No valid GPT data, bailing 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 01:05:45.119 06:04:39 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 01:05:45.119 No valid GPT data, bailing 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 01:05:45.119 No valid GPT data, bailing 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:05:45.119 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 01:05:45.120 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 01:05:45.120 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 01:05:45.120 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 01:05:45.377 No valid GPT data, bailing 01:05:45.377 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 01:05:45.377 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 01:05:45.377 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 01:05:45.377 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 01:05:45.377 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 01:05:45.377 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:05:45.378 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:05:45.378 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 01:05:45.378 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 01:05:45.378 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 01:05:45.378 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 01:05:45.378 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 01:05:45.378 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 01:05:45.378 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 01:05:45.378 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 01:05:45.378 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 01:05:45.378 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 01:05:45.378 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -a 10.0.0.1 -t tcp -s 4420 01:05:45.378 01:05:45.378 Discovery Log Number of Records 2, Generation counter 2 01:05:45.378 =====Discovery Log Entry 0====== 01:05:45.378 trtype: tcp 01:05:45.378 adrfam: ipv4 01:05:45.378 subtype: current discovery subsystem 01:05:45.378 treq: not specified, sq flow control disable supported 01:05:45.378 portid: 1 01:05:45.378 trsvcid: 4420 01:05:45.378 subnqn: nqn.2014-08.org.nvmexpress.discovery 01:05:45.378 traddr: 10.0.0.1 01:05:45.378 eflags: none 01:05:45.378 sectype: none 01:05:45.378 =====Discovery Log Entry 1====== 01:05:45.378 trtype: tcp 01:05:45.378 adrfam: ipv4 01:05:45.378 subtype: nvme subsystem 01:05:45.378 treq: not 
specified, sq flow control disable supported 01:05:45.378 portid: 1 01:05:45.378 trsvcid: 4420 01:05:45.378 subnqn: nqn.2016-06.io.spdk:testnqn 01:05:45.378 traddr: 10.0.0.1 01:05:45.378 eflags: none 01:05:45.378 sectype: none 01:05:45.378 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 01:05:45.378 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 01:05:45.636 ===================================================== 01:05:45.636 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 01:05:45.636 ===================================================== 01:05:45.636 Controller Capabilities/Features 01:05:45.636 ================================ 01:05:45.636 Vendor ID: 0000 01:05:45.636 Subsystem Vendor ID: 0000 01:05:45.636 Serial Number: 7cacd0152c6f386e0aa8 01:05:45.636 Model Number: Linux 01:05:45.636 Firmware Version: 6.8.9-20 01:05:45.636 Recommended Arb Burst: 0 01:05:45.636 IEEE OUI Identifier: 00 00 00 01:05:45.636 Multi-path I/O 01:05:45.636 May have multiple subsystem ports: No 01:05:45.636 May have multiple controllers: No 01:05:45.636 Associated with SR-IOV VF: No 01:05:45.636 Max Data Transfer Size: Unlimited 01:05:45.636 Max Number of Namespaces: 0 01:05:45.636 Max Number of I/O Queues: 1024 01:05:45.636 NVMe Specification Version (VS): 1.3 01:05:45.636 NVMe Specification Version (Identify): 1.3 01:05:45.636 Maximum Queue Entries: 1024 01:05:45.636 Contiguous Queues Required: No 01:05:45.636 Arbitration Mechanisms Supported 01:05:45.636 Weighted Round Robin: Not Supported 01:05:45.636 Vendor Specific: Not Supported 01:05:45.636 Reset Timeout: 7500 ms 01:05:45.636 Doorbell Stride: 4 bytes 01:05:45.636 NVM Subsystem Reset: Not Supported 01:05:45.636 Command Sets Supported 01:05:45.636 NVM Command Set: Supported 01:05:45.636 Boot Partition: Not Supported 01:05:45.636 Memory Page Size Minimum: 4096 bytes 01:05:45.636 Memory Page Size Maximum: 4096 bytes 01:05:45.636 Persistent Memory Region: Not Supported 01:05:45.636 Optional Asynchronous Events Supported 01:05:45.636 Namespace Attribute Notices: Not Supported 01:05:45.636 Firmware Activation Notices: Not Supported 01:05:45.636 ANA Change Notices: Not Supported 01:05:45.636 PLE Aggregate Log Change Notices: Not Supported 01:05:45.636 LBA Status Info Alert Notices: Not Supported 01:05:45.636 EGE Aggregate Log Change Notices: Not Supported 01:05:45.636 Normal NVM Subsystem Shutdown event: Not Supported 01:05:45.636 Zone Descriptor Change Notices: Not Supported 01:05:45.636 Discovery Log Change Notices: Supported 01:05:45.636 Controller Attributes 01:05:45.637 128-bit Host Identifier: Not Supported 01:05:45.637 Non-Operational Permissive Mode: Not Supported 01:05:45.637 NVM Sets: Not Supported 01:05:45.637 Read Recovery Levels: Not Supported 01:05:45.637 Endurance Groups: Not Supported 01:05:45.637 Predictable Latency Mode: Not Supported 01:05:45.637 Traffic Based Keep ALive: Not Supported 01:05:45.637 Namespace Granularity: Not Supported 01:05:45.637 SQ Associations: Not Supported 01:05:45.637 UUID List: Not Supported 01:05:45.637 Multi-Domain Subsystem: Not Supported 01:05:45.637 Fixed Capacity Management: Not Supported 01:05:45.637 Variable Capacity Management: Not Supported 01:05:45.637 Delete Endurance Group: Not Supported 01:05:45.637 Delete NVM Set: Not Supported 01:05:45.637 Extended LBA Formats Supported: Not Supported 01:05:45.637 Flexible Data 
Placement Supported: Not Supported 01:05:45.637 01:05:45.637 Controller Memory Buffer Support 01:05:45.637 ================================ 01:05:45.637 Supported: No 01:05:45.637 01:05:45.637 Persistent Memory Region Support 01:05:45.637 ================================ 01:05:45.637 Supported: No 01:05:45.637 01:05:45.637 Admin Command Set Attributes 01:05:45.637 ============================ 01:05:45.637 Security Send/Receive: Not Supported 01:05:45.637 Format NVM: Not Supported 01:05:45.637 Firmware Activate/Download: Not Supported 01:05:45.637 Namespace Management: Not Supported 01:05:45.637 Device Self-Test: Not Supported 01:05:45.637 Directives: Not Supported 01:05:45.637 NVMe-MI: Not Supported 01:05:45.637 Virtualization Management: Not Supported 01:05:45.637 Doorbell Buffer Config: Not Supported 01:05:45.637 Get LBA Status Capability: Not Supported 01:05:45.637 Command & Feature Lockdown Capability: Not Supported 01:05:45.637 Abort Command Limit: 1 01:05:45.637 Async Event Request Limit: 1 01:05:45.637 Number of Firmware Slots: N/A 01:05:45.637 Firmware Slot 1 Read-Only: N/A 01:05:45.637 Firmware Activation Without Reset: N/A 01:05:45.637 Multiple Update Detection Support: N/A 01:05:45.637 Firmware Update Granularity: No Information Provided 01:05:45.637 Per-Namespace SMART Log: No 01:05:45.637 Asymmetric Namespace Access Log Page: Not Supported 01:05:45.637 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 01:05:45.637 Command Effects Log Page: Not Supported 01:05:45.637 Get Log Page Extended Data: Supported 01:05:45.637 Telemetry Log Pages: Not Supported 01:05:45.637 Persistent Event Log Pages: Not Supported 01:05:45.637 Supported Log Pages Log Page: May Support 01:05:45.637 Commands Supported & Effects Log Page: Not Supported 01:05:45.637 Feature Identifiers & Effects Log Page:May Support 01:05:45.637 NVMe-MI Commands & Effects Log Page: May Support 01:05:45.637 Data Area 4 for Telemetry Log: Not Supported 01:05:45.637 Error Log Page Entries Supported: 1 01:05:45.637 Keep Alive: Not Supported 01:05:45.637 01:05:45.637 NVM Command Set Attributes 01:05:45.637 ========================== 01:05:45.637 Submission Queue Entry Size 01:05:45.637 Max: 1 01:05:45.637 Min: 1 01:05:45.637 Completion Queue Entry Size 01:05:45.637 Max: 1 01:05:45.637 Min: 1 01:05:45.637 Number of Namespaces: 0 01:05:45.637 Compare Command: Not Supported 01:05:45.637 Write Uncorrectable Command: Not Supported 01:05:45.637 Dataset Management Command: Not Supported 01:05:45.637 Write Zeroes Command: Not Supported 01:05:45.637 Set Features Save Field: Not Supported 01:05:45.637 Reservations: Not Supported 01:05:45.637 Timestamp: Not Supported 01:05:45.637 Copy: Not Supported 01:05:45.637 Volatile Write Cache: Not Present 01:05:45.637 Atomic Write Unit (Normal): 1 01:05:45.637 Atomic Write Unit (PFail): 1 01:05:45.637 Atomic Compare & Write Unit: 1 01:05:45.637 Fused Compare & Write: Not Supported 01:05:45.637 Scatter-Gather List 01:05:45.637 SGL Command Set: Supported 01:05:45.637 SGL Keyed: Not Supported 01:05:45.637 SGL Bit Bucket Descriptor: Not Supported 01:05:45.637 SGL Metadata Pointer: Not Supported 01:05:45.637 Oversized SGL: Not Supported 01:05:45.637 SGL Metadata Address: Not Supported 01:05:45.637 SGL Offset: Supported 01:05:45.637 Transport SGL Data Block: Not Supported 01:05:45.637 Replay Protected Memory Block: Not Supported 01:05:45.637 01:05:45.637 Firmware Slot Information 01:05:45.637 ========================= 01:05:45.637 Active slot: 0 01:05:45.637 01:05:45.637 01:05:45.637 Error Log 
01:05:45.637 ========= 01:05:45.637 01:05:45.637 Active Namespaces 01:05:45.637 ================= 01:05:45.637 Discovery Log Page 01:05:45.637 ================== 01:05:45.637 Generation Counter: 2 01:05:45.637 Number of Records: 2 01:05:45.637 Record Format: 0 01:05:45.637 01:05:45.637 Discovery Log Entry 0 01:05:45.637 ---------------------- 01:05:45.637 Transport Type: 3 (TCP) 01:05:45.637 Address Family: 1 (IPv4) 01:05:45.637 Subsystem Type: 3 (Current Discovery Subsystem) 01:05:45.637 Entry Flags: 01:05:45.637 Duplicate Returned Information: 0 01:05:45.637 Explicit Persistent Connection Support for Discovery: 0 01:05:45.637 Transport Requirements: 01:05:45.637 Secure Channel: Not Specified 01:05:45.637 Port ID: 1 (0x0001) 01:05:45.637 Controller ID: 65535 (0xffff) 01:05:45.637 Admin Max SQ Size: 32 01:05:45.637 Transport Service Identifier: 4420 01:05:45.637 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 01:05:45.637 Transport Address: 10.0.0.1 01:05:45.637 Discovery Log Entry 1 01:05:45.637 ---------------------- 01:05:45.637 Transport Type: 3 (TCP) 01:05:45.637 Address Family: 1 (IPv4) 01:05:45.637 Subsystem Type: 2 (NVM Subsystem) 01:05:45.637 Entry Flags: 01:05:45.637 Duplicate Returned Information: 0 01:05:45.637 Explicit Persistent Connection Support for Discovery: 0 01:05:45.637 Transport Requirements: 01:05:45.637 Secure Channel: Not Specified 01:05:45.637 Port ID: 1 (0x0001) 01:05:45.637 Controller ID: 65535 (0xffff) 01:05:45.637 Admin Max SQ Size: 32 01:05:45.637 Transport Service Identifier: 4420 01:05:45.637 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 01:05:45.637 Transport Address: 10.0.0.1 01:05:45.638 06:04:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:05:45.638 get_feature(0x01) failed 01:05:45.638 get_feature(0x02) failed 01:05:45.638 get_feature(0x04) failed 01:05:45.638 ===================================================== 01:05:45.638 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:05:45.638 ===================================================== 01:05:45.638 Controller Capabilities/Features 01:05:45.638 ================================ 01:05:45.638 Vendor ID: 0000 01:05:45.638 Subsystem Vendor ID: 0000 01:05:45.638 Serial Number: c490f367ce9cb9a82303 01:05:45.638 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 01:05:45.638 Firmware Version: 6.8.9-20 01:05:45.638 Recommended Arb Burst: 6 01:05:45.638 IEEE OUI Identifier: 00 00 00 01:05:45.638 Multi-path I/O 01:05:45.638 May have multiple subsystem ports: Yes 01:05:45.638 May have multiple controllers: Yes 01:05:45.638 Associated with SR-IOV VF: No 01:05:45.638 Max Data Transfer Size: Unlimited 01:05:45.638 Max Number of Namespaces: 1024 01:05:45.638 Max Number of I/O Queues: 128 01:05:45.638 NVMe Specification Version (VS): 1.3 01:05:45.638 NVMe Specification Version (Identify): 1.3 01:05:45.638 Maximum Queue Entries: 1024 01:05:45.638 Contiguous Queues Required: No 01:05:45.638 Arbitration Mechanisms Supported 01:05:45.638 Weighted Round Robin: Not Supported 01:05:45.638 Vendor Specific: Not Supported 01:05:45.638 Reset Timeout: 7500 ms 01:05:45.638 Doorbell Stride: 4 bytes 01:05:45.638 NVM Subsystem Reset: Not Supported 01:05:45.638 Command Sets Supported 01:05:45.638 NVM Command Set: Supported 01:05:45.638 Boot Partition: Not Supported 01:05:45.638 Memory 
Page Size Minimum: 4096 bytes 01:05:45.638 Memory Page Size Maximum: 4096 bytes 01:05:45.638 Persistent Memory Region: Not Supported 01:05:45.638 Optional Asynchronous Events Supported 01:05:45.638 Namespace Attribute Notices: Supported 01:05:45.638 Firmware Activation Notices: Not Supported 01:05:45.638 ANA Change Notices: Supported 01:05:45.638 PLE Aggregate Log Change Notices: Not Supported 01:05:45.638 LBA Status Info Alert Notices: Not Supported 01:05:45.638 EGE Aggregate Log Change Notices: Not Supported 01:05:45.638 Normal NVM Subsystem Shutdown event: Not Supported 01:05:45.638 Zone Descriptor Change Notices: Not Supported 01:05:45.638 Discovery Log Change Notices: Not Supported 01:05:45.638 Controller Attributes 01:05:45.638 128-bit Host Identifier: Supported 01:05:45.638 Non-Operational Permissive Mode: Not Supported 01:05:45.638 NVM Sets: Not Supported 01:05:45.638 Read Recovery Levels: Not Supported 01:05:45.638 Endurance Groups: Not Supported 01:05:45.638 Predictable Latency Mode: Not Supported 01:05:45.638 Traffic Based Keep ALive: Supported 01:05:45.638 Namespace Granularity: Not Supported 01:05:45.638 SQ Associations: Not Supported 01:05:45.638 UUID List: Not Supported 01:05:45.638 Multi-Domain Subsystem: Not Supported 01:05:45.638 Fixed Capacity Management: Not Supported 01:05:45.638 Variable Capacity Management: Not Supported 01:05:45.638 Delete Endurance Group: Not Supported 01:05:45.638 Delete NVM Set: Not Supported 01:05:45.638 Extended LBA Formats Supported: Not Supported 01:05:45.638 Flexible Data Placement Supported: Not Supported 01:05:45.638 01:05:45.638 Controller Memory Buffer Support 01:05:45.638 ================================ 01:05:45.638 Supported: No 01:05:45.638 01:05:45.638 Persistent Memory Region Support 01:05:45.638 ================================ 01:05:45.638 Supported: No 01:05:45.638 01:05:45.638 Admin Command Set Attributes 01:05:45.638 ============================ 01:05:45.638 Security Send/Receive: Not Supported 01:05:45.638 Format NVM: Not Supported 01:05:45.638 Firmware Activate/Download: Not Supported 01:05:45.638 Namespace Management: Not Supported 01:05:45.638 Device Self-Test: Not Supported 01:05:45.638 Directives: Not Supported 01:05:45.638 NVMe-MI: Not Supported 01:05:45.638 Virtualization Management: Not Supported 01:05:45.638 Doorbell Buffer Config: Not Supported 01:05:45.638 Get LBA Status Capability: Not Supported 01:05:45.638 Command & Feature Lockdown Capability: Not Supported 01:05:45.638 Abort Command Limit: 4 01:05:45.638 Async Event Request Limit: 4 01:05:45.638 Number of Firmware Slots: N/A 01:05:45.638 Firmware Slot 1 Read-Only: N/A 01:05:45.638 Firmware Activation Without Reset: N/A 01:05:45.638 Multiple Update Detection Support: N/A 01:05:45.638 Firmware Update Granularity: No Information Provided 01:05:45.638 Per-Namespace SMART Log: Yes 01:05:45.638 Asymmetric Namespace Access Log Page: Supported 01:05:45.638 ANA Transition Time : 10 sec 01:05:45.638 01:05:45.638 Asymmetric Namespace Access Capabilities 01:05:45.638 ANA Optimized State : Supported 01:05:45.638 ANA Non-Optimized State : Supported 01:05:45.638 ANA Inaccessible State : Supported 01:05:45.638 ANA Persistent Loss State : Supported 01:05:45.638 ANA Change State : Supported 01:05:45.638 ANAGRPID is not changed : No 01:05:45.638 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 01:05:45.638 01:05:45.638 ANA Group Identifier Maximum : 128 01:05:45.638 Number of ANA Group Identifiers : 128 01:05:45.638 Max Number of Allowed Namespaces : 1024 01:05:45.638 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 01:05:45.638 Command Effects Log Page: Supported 01:05:45.638 Get Log Page Extended Data: Supported 01:05:45.638 Telemetry Log Pages: Not Supported 01:05:45.638 Persistent Event Log Pages: Not Supported 01:05:45.638 Supported Log Pages Log Page: May Support 01:05:45.638 Commands Supported & Effects Log Page: Not Supported 01:05:45.638 Feature Identifiers & Effects Log Page:May Support 01:05:45.638 NVMe-MI Commands & Effects Log Page: May Support 01:05:45.638 Data Area 4 for Telemetry Log: Not Supported 01:05:45.638 Error Log Page Entries Supported: 128 01:05:45.638 Keep Alive: Supported 01:05:45.639 Keep Alive Granularity: 1000 ms 01:05:45.639 01:05:45.639 NVM Command Set Attributes 01:05:45.639 ========================== 01:05:45.639 Submission Queue Entry Size 01:05:45.639 Max: 64 01:05:45.639 Min: 64 01:05:45.639 Completion Queue Entry Size 01:05:45.639 Max: 16 01:05:45.639 Min: 16 01:05:45.639 Number of Namespaces: 1024 01:05:45.639 Compare Command: Not Supported 01:05:45.639 Write Uncorrectable Command: Not Supported 01:05:45.639 Dataset Management Command: Supported 01:05:45.639 Write Zeroes Command: Supported 01:05:45.639 Set Features Save Field: Not Supported 01:05:45.639 Reservations: Not Supported 01:05:45.639 Timestamp: Not Supported 01:05:45.639 Copy: Not Supported 01:05:45.639 Volatile Write Cache: Present 01:05:45.639 Atomic Write Unit (Normal): 1 01:05:45.639 Atomic Write Unit (PFail): 1 01:05:45.639 Atomic Compare & Write Unit: 1 01:05:45.639 Fused Compare & Write: Not Supported 01:05:45.639 Scatter-Gather List 01:05:45.639 SGL Command Set: Supported 01:05:45.639 SGL Keyed: Not Supported 01:05:45.639 SGL Bit Bucket Descriptor: Not Supported 01:05:45.639 SGL Metadata Pointer: Not Supported 01:05:45.639 Oversized SGL: Not Supported 01:05:45.639 SGL Metadata Address: Not Supported 01:05:45.639 SGL Offset: Supported 01:05:45.639 Transport SGL Data Block: Not Supported 01:05:45.639 Replay Protected Memory Block: Not Supported 01:05:45.639 01:05:45.639 Firmware Slot Information 01:05:45.639 ========================= 01:05:45.639 Active slot: 0 01:05:45.639 01:05:45.639 Asymmetric Namespace Access 01:05:45.639 =========================== 01:05:45.639 Change Count : 0 01:05:45.639 Number of ANA Group Descriptors : 1 01:05:45.639 ANA Group Descriptor : 0 01:05:45.639 ANA Group ID : 1 01:05:45.639 Number of NSID Values : 1 01:05:45.639 Change Count : 0 01:05:45.639 ANA State : 1 01:05:45.639 Namespace Identifier : 1 01:05:45.639 01:05:45.639 Commands Supported and Effects 01:05:45.639 ============================== 01:05:45.639 Admin Commands 01:05:45.639 -------------- 01:05:45.639 Get Log Page (02h): Supported 01:05:45.639 Identify (06h): Supported 01:05:45.639 Abort (08h): Supported 01:05:45.639 Set Features (09h): Supported 01:05:45.639 Get Features (0Ah): Supported 01:05:45.639 Asynchronous Event Request (0Ch): Supported 01:05:45.639 Keep Alive (18h): Supported 01:05:45.639 I/O Commands 01:05:45.639 ------------ 01:05:45.639 Flush (00h): Supported 01:05:45.639 Write (01h): Supported LBA-Change 01:05:45.639 Read (02h): Supported 01:05:45.639 Write Zeroes (08h): Supported LBA-Change 01:05:45.639 Dataset Management (09h): Supported 01:05:45.639 01:05:45.639 Error Log 01:05:45.639 ========= 01:05:45.639 Entry: 0 01:05:45.639 Error Count: 0x3 01:05:45.639 Submission Queue Id: 0x0 01:05:45.639 Command Id: 0x5 01:05:45.639 Phase Bit: 0 01:05:45.639 Status Code: 0x2 01:05:45.639 Status Code Type: 0x0 01:05:45.639 Do Not Retry: 1 01:05:45.639 Error 
Location: 0x28 01:05:45.639 LBA: 0x0 01:05:45.639 Namespace: 0x0 01:05:45.639 Vendor Log Page: 0x0 01:05:45.639 ----------- 01:05:45.639 Entry: 1 01:05:45.639 Error Count: 0x2 01:05:45.639 Submission Queue Id: 0x0 01:05:45.639 Command Id: 0x5 01:05:45.639 Phase Bit: 0 01:05:45.639 Status Code: 0x2 01:05:45.639 Status Code Type: 0x0 01:05:45.639 Do Not Retry: 1 01:05:45.639 Error Location: 0x28 01:05:45.639 LBA: 0x0 01:05:45.639 Namespace: 0x0 01:05:45.639 Vendor Log Page: 0x0 01:05:45.639 ----------- 01:05:45.639 Entry: 2 01:05:45.639 Error Count: 0x1 01:05:45.639 Submission Queue Id: 0x0 01:05:45.639 Command Id: 0x4 01:05:45.639 Phase Bit: 0 01:05:45.639 Status Code: 0x2 01:05:45.639 Status Code Type: 0x0 01:05:45.639 Do Not Retry: 1 01:05:45.639 Error Location: 0x28 01:05:45.639 LBA: 0x0 01:05:45.639 Namespace: 0x0 01:05:45.639 Vendor Log Page: 0x0 01:05:45.639 01:05:45.639 Number of Queues 01:05:45.639 ================ 01:05:45.639 Number of I/O Submission Queues: 128 01:05:45.639 Number of I/O Completion Queues: 128 01:05:45.639 01:05:45.639 ZNS Specific Controller Data 01:05:45.639 ============================ 01:05:45.639 Zone Append Size Limit: 0 01:05:45.639 01:05:45.639 01:05:45.639 Active Namespaces 01:05:45.639 ================= 01:05:45.639 get_feature(0x05) failed 01:05:45.639 Namespace ID:1 01:05:45.639 Command Set Identifier: NVM (00h) 01:05:45.639 Deallocate: Supported 01:05:45.639 Deallocated/Unwritten Error: Not Supported 01:05:45.639 Deallocated Read Value: Unknown 01:05:45.639 Deallocate in Write Zeroes: Not Supported 01:05:45.639 Deallocated Guard Field: 0xFFFF 01:05:45.639 Flush: Supported 01:05:45.639 Reservation: Not Supported 01:05:45.639 Namespace Sharing Capabilities: Multiple Controllers 01:05:45.639 Size (in LBAs): 1310720 (5GiB) 01:05:45.639 Capacity (in LBAs): 1310720 (5GiB) 01:05:45.639 Utilization (in LBAs): 1310720 (5GiB) 01:05:45.639 UUID: 9525562e-3649-4ecd-ad3d-ebc93f16730f 01:05:45.639 Thin Provisioning: Not Supported 01:05:45.639 Per-NS Atomic Units: Yes 01:05:45.639 Atomic Boundary Size (Normal): 0 01:05:45.639 Atomic Boundary Size (PFail): 0 01:05:45.639 Atomic Boundary Offset: 0 01:05:45.639 NGUID/EUI64 Never Reused: No 01:05:45.639 ANA group ID: 1 01:05:45.639 Namespace Write Protected: No 01:05:45.639 Number of LBA Formats: 1 01:05:45.639 Current LBA Format: LBA Format #00 01:05:45.639 LBA Format #00: Data Size: 4096 Metadata Size: 0 01:05:45.639 01:05:45.639 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 01:05:45.639 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 01:05:45.640 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 01:05:45.640 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:05:45.640 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 01:05:45.640 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 01:05:45.640 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:05:45.640 rmmod nvme_tcp 01:05:45.897 rmmod nvme_fabrics 01:05:45.897 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:05:45.897 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 01:05:45.897 06:04:40 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 01:05:45.897 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 01:05:45.897 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:05:45.897 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:05:45.897 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:05:45.897 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 01:05:45.897 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 01:05:45.897 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:05:45.897 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 01:05:45.897 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:05:45.897 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:05:45.897 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:05:45.897 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:05:45.897 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:05:45.897 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:05:45.897 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:05:45.897 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:05:45.897 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:05:45.897 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:05:45.897 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:05:45.897 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:05:45.897 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:05:45.897 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:05:45.897 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:05:45.897 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 01:05:45.897 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:05:45.898 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:05:45.898 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:05:45.898 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 01:05:45.898 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 01:05:45.898 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 01:05:45.898 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 01:05:46.155 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 01:05:46.155 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:05:46.155 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 01:05:46.155 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:05:46.155 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 01:05:46.155 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 01:05:46.155 06:04:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:05:46.739 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:05:46.739 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:05:46.739 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:05:46.998 01:05:46.998 real 0m3.125s 01:05:46.998 user 0m1.112s 01:05:46.998 sys 0m1.410s 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 01:05:46.998 ************************************ 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 01:05:46.998 END TEST nvmf_identify_kernel_target 01:05:46.998 ************************************ 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:05:46.998 ************************************ 01:05:46.998 START TEST nvmf_auth_host 01:05:46.998 ************************************ 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 01:05:46.998 * Looking for test storage... 
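[editor's note] The clean_kernel_target steps traced above reduce to the teardown order below. This is a condensed sketch, not the script itself: the NQN, namespace and port numbers are the ones this test uses, and the redirect target of the `echo 0` step (the namespace's configfs `enable` attribute) is an assumption, since the trace only shows the echo.

nqn=nqn.2016-06.io.spdk:testnqn
cfg=/sys/kernel/config/nvmet
if [[ -e $cfg/subsystems/$nqn ]]; then
    echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"   # stop serving the namespace first (assumed target of the traced "echo 0")
    rm -f "$cfg/ports/1/subsystems/$nqn"                  # unlink the subsystem from the port
    rmdir "$cfg/subsystems/$nqn/namespaces/1"             # then remove namespace, port and subsystem directories
    rmdir "$cfg/ports/1"
    rmdir "$cfg/subsystems/$nqn"
    modprobe -r nvmet_tcp nvmet                           # finally drop the kernel target modules, as traced above
fi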
01:05:46.998 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 01:05:46.998 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:05:46.999 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:05:46.999 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 01:05:46.999 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:05:46.999 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:05:46.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:05:46.999 --rc genhtml_branch_coverage=1 01:05:46.999 --rc genhtml_function_coverage=1 01:05:46.999 --rc genhtml_legend=1 01:05:46.999 --rc geninfo_all_blocks=1 01:05:46.999 --rc geninfo_unexecuted_blocks=1 01:05:46.999 01:05:46.999 ' 01:05:46.999 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:05:46.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:05:46.999 --rc genhtml_branch_coverage=1 01:05:46.999 --rc genhtml_function_coverage=1 01:05:46.999 --rc genhtml_legend=1 01:05:46.999 --rc geninfo_all_blocks=1 01:05:46.999 --rc geninfo_unexecuted_blocks=1 01:05:46.999 01:05:46.999 ' 01:05:46.999 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:05:46.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:05:46.999 --rc genhtml_branch_coverage=1 01:05:46.999 --rc genhtml_function_coverage=1 01:05:46.999 --rc genhtml_legend=1 01:05:46.999 --rc geninfo_all_blocks=1 01:05:46.999 --rc geninfo_unexecuted_blocks=1 01:05:46.999 01:05:46.999 ' 01:05:46.999 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:05:46.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:05:46.999 --rc genhtml_branch_coverage=1 01:05:46.999 --rc genhtml_function_coverage=1 01:05:46.999 --rc genhtml_legend=1 01:05:46.999 --rc geninfo_all_blocks=1 01:05:46.999 --rc geninfo_unexecuted_blocks=1 01:05:46.999 01:05:46.999 ' 01:05:46.999 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:05:46.999 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 01:05:46.999 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:05:46.999 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 01:05:46.999 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:05:46.999 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:05:46.999 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:05:46.999 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:05:46.999 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:05:46.999 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:05:46.999 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:05:47.258 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:05:47.258 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:05:47.258 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:05:47.258 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:05:47.258 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:05:47.258 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:05:47.258 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:05:47.258 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:05:47.258 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 01:05:47.258 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:05:47.258 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:05:47.258 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:05:47.258 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:47.258 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:47.258 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:47.258 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:05:47.259 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:05:47.259 Cannot find device "nvmf_init_br" 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:05:47.259 Cannot find device "nvmf_init_br2" 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:05:47.259 Cannot find device "nvmf_tgt_br" 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:05:47.259 Cannot find device "nvmf_tgt_br2" 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:05:47.259 Cannot find device "nvmf_init_br" 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:05:47.259 Cannot find device "nvmf_init_br2" 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:05:47.259 Cannot find device "nvmf_tgt_br" 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:05:47.259 Cannot find device "nvmf_tgt_br2" 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:05:47.259 Cannot find device "nvmf_br" 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:05:47.259 Cannot find device "nvmf_init_if" 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:05:47.259 Cannot find device "nvmf_init_if2" 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:05:47.259 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:05:47.259 06:04:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:05:47.259 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:05:47.259 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:05:47.518 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:05:47.518 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:05:47.518 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
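[editor's note] The nvmf_veth_init commands traced here amount to the topology sketched below (a condensation of the same commands, not an alternative implementation): two initiator veths with 10.0.0.1 and 10.0.0.2 stay in the root namespace, two target veths with 10.0.0.3 and 10.0.0.4 move into nvmf_tgt_ns_spdk, and the four peer ends are enslaved to the nvmf_br bridge.

ns=nvmf_tgt_ns_spdk
ip netns add "$ns"
# veth pairs: the *_if ends carry addresses, the *_br ends get bridged
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$ns"
ip link set nvmf_tgt_if2 netns "$ns"
# initiator side: 10.0.0.1/10.0.0.2, target side (inside the netns): 10.0.0.3/10.0.0.4
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$ns" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$ns" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$ns" ip link set nvmf_tgt_if up
ip netns exec "$ns" ip link set nvmf_tgt_if2 up
ip netns exec "$ns" ip link set lo up
# one bridge ties the four peer ends together so initiator and target namespaces can reach each other
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done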
01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:05:47.519 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:05:47.519 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 01:05:47.519 01:05:47.519 --- 10.0.0.3 ping statistics --- 01:05:47.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:05:47.519 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:05:47.519 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:05:47.519 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 01:05:47.519 01:05:47.519 --- 10.0.0.4 ping statistics --- 01:05:47.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:05:47.519 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:05:47.519 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:05:47.519 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 01:05:47.519 01:05:47.519 --- 10.0.0.1 ping statistics --- 01:05:47.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:05:47.519 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:05:47.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:05:47.519 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 01:05:47.519 01:05:47.519 --- 10.0.0.2 ping statistics --- 01:05:47.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:05:47.519 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=91529 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 91529 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 91529 ']' 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
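[editor's note] The nvmfappstart/waitforlisten sequence traced here launches nvmf_tgt inside the target netns and then waits for the RPC UNIX socket to come up. A minimal sketch of that flow follows; the polling command (rpc.py rpc_get_methods) is an assumption about waitforlisten's internals, since the trace only shows the "Waiting for process to start up..." message.

spdk=/home/vagrant/spdk_repo/spdk
rpc_addr=/var/tmp/spdk.sock
# start the target inside the netns, as in the traced NVMF_APP invocation
ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
# poll the RPC socket until the target answers (assumed equivalent of waitforlisten)
until "$spdk/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited prematurely" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is listening on $rpc_addr"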
01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 01:05:47.519 06:04:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:47.778 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:05:47.778 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 01:05:47.778 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:05:47.778 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 01:05:47.778 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:47.778 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:05:47.778 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 01:05:47.778 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 01:05:47.778 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:05:47.778 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:05:47.778 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:05:47.778 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 01:05:47.778 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 01:05:47.778 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 01:05:47.778 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=25ccffd8b53e61cacfbfa15fbdae1879 01:05:47.778 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 01:05:47.778 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.PtS 01:05:47.778 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 25ccffd8b53e61cacfbfa15fbdae1879 0 01:05:47.778 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 25ccffd8b53e61cacfbfa15fbdae1879 0 01:05:47.778 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:05:47.778 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:05:47.778 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=25ccffd8b53e61cacfbfa15fbdae1879 01:05:47.778 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 01:05:47.778 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.PtS 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.PtS 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.PtS 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:05:48.060 06:04:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4885ecae60ec00353cfd9aaba50f7d6a9264e5d79645a20fb611a93c603a647e 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.bId 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4885ecae60ec00353cfd9aaba50f7d6a9264e5d79645a20fb611a93c603a647e 3 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4885ecae60ec00353cfd9aaba50f7d6a9264e5d79645a20fb611a93c603a647e 3 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4885ecae60ec00353cfd9aaba50f7d6a9264e5d79645a20fb611a93c603a647e 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.bId 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.bId 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.bId 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=052faa17afc8b9be0a70ce897fcb933636464a611eca2b35 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.jnc 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 052faa17afc8b9be0a70ce897fcb933636464a611eca2b35 0 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 052faa17afc8b9be0a70ce897fcb933636464a611eca2b35 0 
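[editor's note] The gen_dhchap_key/format_key trace around this point reads random bytes with xxd and then runs "python -" to turn them into a DH-HMAC-CHAP secret file. The sketch below shows roughly what that step produces; the exact encoding (base64 of the secret bytes with a little-endian CRC-32 appended, wrapped as DHHC-1:<hash id>:<base64>:) follows the NVMe DH-HMAC-CHAP secret representation and is an assumption about the python step, which the trace does not expand. The function name and numeric digest argument here are illustrative.

gen_dhchap_key() {
    local digest=$1 len=$2   # digest id: 0=null, 1=sha256, 2=sha384, 3=sha512; len = hex characters of secret
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # same randomness source as the traced xxd call
    file=$(mktemp -t spdk.key-XXXXXX)
    python3 - "$key" "$digest" > "$file" << 'EOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")          # CRC-32 of the secret, appended little-endian
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
EOF
    chmod 0600 "$file"                                # keys are written 0600, as in the trace
    echo "$file"
}

# e.g. a 32-hex-char (16-byte) secret with no hash transform, like keys[0] above:
gen_dhchap_key 0 32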
01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=052faa17afc8b9be0a70ce897fcb933636464a611eca2b35 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.jnc 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.jnc 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.jnc 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=69ac50c19c908b9ef24b42510a2aa4144309512891f997b4 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.8QT 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 69ac50c19c908b9ef24b42510a2aa4144309512891f997b4 2 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 69ac50c19c908b9ef24b42510a2aa4144309512891f997b4 2 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=69ac50c19c908b9ef24b42510a2aa4144309512891f997b4 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.8QT 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.8QT 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.8QT 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:05:48.060 06:04:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=69cebe26f00b32451dd58e0c44c4fe40 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Vj5 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 69cebe26f00b32451dd58e0c44c4fe40 1 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 69cebe26f00b32451dd58e0c44c4fe40 1 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=69cebe26f00b32451dd58e0c44c4fe40 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Vj5 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Vj5 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Vj5 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2a3d34ee80aff588176fdd712db27aee 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Wfu 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2a3d34ee80aff588176fdd712db27aee 1 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2a3d34ee80aff588176fdd712db27aee 1 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=2a3d34ee80aff588176fdd712db27aee 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 01:05:48.060 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:05:48.320 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Wfu 01:05:48.320 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Wfu 01:05:48.320 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Wfu 01:05:48.320 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 01:05:48.320 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:05:48.320 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:05:48.320 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:05:48.320 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 01:05:48.320 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 01:05:48.320 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 01:05:48.320 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4d3e1a40f82321d4d999f8c0455f90bb2d0d609a1d5b7370 01:05:48.320 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 01:05:48.320 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.893 01:05:48.320 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4d3e1a40f82321d4d999f8c0455f90bb2d0d609a1d5b7370 2 01:05:48.320 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4d3e1a40f82321d4d999f8c0455f90bb2d0d609a1d5b7370 2 01:05:48.320 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:05:48.320 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:05:48.320 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4d3e1a40f82321d4d999f8c0455f90bb2d0d609a1d5b7370 01:05:48.320 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 01:05:48.320 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:05:48.320 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.893 01:05:48.320 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.893 01:05:48.320 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.893 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 01:05:48.321 06:04:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3747ca8a27613bb22558d2397379f4b8 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.1yN 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3747ca8a27613bb22558d2397379f4b8 0 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3747ca8a27613bb22558d2397379f4b8 0 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3747ca8a27613bb22558d2397379f4b8 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.1yN 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.1yN 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.1yN 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ac365560036b081da1b2627dac6be3875afee02517d2819592ff910cabe0775a 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.hix 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ac365560036b081da1b2627dac6be3875afee02517d2819592ff910cabe0775a 3 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ac365560036b081da1b2627dac6be3875afee02517d2819592ff910cabe0775a 3 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ac365560036b081da1b2627dac6be3875afee02517d2819592ff910cabe0775a 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.hix 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.hix 01:05:48.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.hix 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 91529 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 91529 ']' 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 01:05:48.321 06:04:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.PtS 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.bId ]] 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bId 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.jnc 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.8QT ]] 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.8QT 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Vj5 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Wfu ]] 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Wfu 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.893 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.1yN ]] 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.1yN 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.hix 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:05:48.889 06:04:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:05:48.889 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:05:48.890 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:05:48.890 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 01:05:48.890 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 01:05:48.890 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 01:05:48.890 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 01:05:48.890 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 01:05:48.890 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 01:05:48.890 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 01:05:48.890 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 01:05:48.890 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 01:05:48.890 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 01:05:48.890 06:04:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:05:49.147 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:05:49.147 Waiting for block devices as requested 01:05:49.147 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:05:49.404 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 01:05:49.970 No valid GPT data, bailing 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 01:05:49.970 No valid GPT data, bailing 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 01:05:49.970 No valid GPT data, bailing 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 01:05:49.970 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 01:05:49.971 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:05:49.971 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:05:49.971 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 01:05:49.971 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 01:05:49.971 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 01:05:50.228 No valid GPT data, bailing 01:05:50.228 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 01:05:50.228 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 01:05:50.228 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 01:05:50.228 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -a 10.0.0.1 -t tcp -s 4420 01:05:50.229 01:05:50.229 Discovery Log Number of Records 2, Generation counter 2 01:05:50.229 =====Discovery Log Entry 0====== 01:05:50.229 trtype: tcp 01:05:50.229 adrfam: ipv4 01:05:50.229 subtype: current discovery subsystem 01:05:50.229 treq: not specified, sq flow control disable supported 01:05:50.229 portid: 1 01:05:50.229 trsvcid: 4420 01:05:50.229 subnqn: nqn.2014-08.org.nvmexpress.discovery 01:05:50.229 traddr: 10.0.0.1 01:05:50.229 eflags: none 01:05:50.229 sectype: none 01:05:50.229 =====Discovery Log Entry 1====== 01:05:50.229 trtype: tcp 01:05:50.229 adrfam: ipv4 01:05:50.229 subtype: nvme subsystem 01:05:50.229 treq: not specified, sq flow control disable supported 01:05:50.229 portid: 1 01:05:50.229 trsvcid: 4420 01:05:50.229 subnqn: nqn.2024-02.io.spdk:cnode0 01:05:50.229 traddr: 10.0.0.1 01:05:50.229 eflags: none 01:05:50.229 sectype: none 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: ]] 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:50.229 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:50.487 nvme0n1 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVjY2ZmZDhiNTNlNjFjYWNmYmZhMTVmYmRhZTE4NzmqmNGr: 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVjY2ZmZDhiNTNlNjFjYWNmYmZhMTVmYmRhZTE4NzmqmNGr: 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: ]] 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:05:50.487 06:04:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:05:50.487 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:50.487 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:50.487 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:50.487 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:05:50.487 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:05:50.487 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:05:50.487 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:05:50.487 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:05:50.487 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:05:50.487 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:05:50.487 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:05:50.487 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:05:50.487 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:05:50.487 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:05:50.488 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:50.488 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:50.488 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:50.745 nvme0n1 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:50.745 
06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: ]] 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:05:50.745 06:04:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:50.745 nvme0n1 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:50.745 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:50.746 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:50.746 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:05:50.746 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:50.746 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:05:51.004 06:04:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: ]] 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:51.004 nvme0n1 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQzZTFhNDBmODIzMjFkNGQ5OTlmOGMwNDU1ZjkwYmIyZDBkNjA5YTFkNWI3Mzcwh9qfMg==: 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQzZTFhNDBmODIzMjFkNGQ5OTlmOGMwNDU1ZjkwYmIyZDBkNjA5YTFkNWI3Mzcwh9qfMg==: 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: ]] 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:51.004 06:04:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:51.004 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:51.347 nvme0n1 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:05:51.348 
06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWMzNjU1NjAwMzZiMDgxZGExYjI2MjdkYWM2YmUzODc1YWZlZTAyNTE3ZDI4MTk1OTJmZjkxMGNhYmUwNzc1YdwQQH4=: 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWMzNjU1NjAwMzZiMDgxZGExYjI2MjdkYWM2YmUzODc1YWZlZTAyNTE3ZDI4MTk1OTJmZjkxMGNhYmUwNzc1YdwQQH4=: 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
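At this point the trace has completed one full pass of the sha256/ffdhe2048 combination: for each of the generated secrets the script points the kernel target at the key (nvmet_auth_set_key), restricts the SPDK initiator to that digest/dhgroup pair (bdev_nvme_set_options), attaches with the matching --dhchap-key/--dhchap-ctrlr-key, checks the controller came up, and detaches again before the next combination. A minimal hand-run sketch of that host-side sequence is below; it uses only RPC names and arguments that appear verbatim in the trace, and it assumes the rpc_cmd helper ultimately forwards to scripts/rpc.py in the spdk_repo checkout talking to the default /var/tmp/spdk.sock (neither assumption is shown explicitly in the log).

    #!/usr/bin/env bash
    # Sketch only: replays one iteration of the connect_authenticate loop by hand.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed path of the RPC client behind rpc_cmd

    # Register the host secret and its controller counterpart with the SPDK keyring
    # (the key files were produced earlier by gen_dhchap_key).
    "$RPC" keyring_file_add_key key2  /tmp/spdk.key-sha256.Vj5
    "$RPC" keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Wfu

    # Limit the initiator to a single digest/dhgroup pair for this iteration.
    "$RPC" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # Attach to the kernel nvmet target configured above, authenticating with key2/ckey2.
    "$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Confirm the controller exists, then detach so the next combination starts clean.
    "$RPC" bdev_nvme_get_controllers
    "$RPC" bdev_nvme_detach_controller nvme0

The kernel-target side (writing the DHHC-1 secret into the nvmet configfs host entry) is handled separately by nvmet_auth_set_key and is not part of this host-side sketch.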
01:05:51.348 nvme0n1 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVjY2ZmZDhiNTNlNjFjYWNmYmZhMTVmYmRhZTE4NzmqmNGr: 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:05:51.348 06:04:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:05:51.645 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVjY2ZmZDhiNTNlNjFjYWNmYmZhMTVmYmRhZTE4NzmqmNGr: 01:05:51.645 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: ]] 01:05:51.645 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: 01:05:51.645 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 01:05:51.645 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:05:51.645 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:05:51.645 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:05:51.645 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:05:51.645 06:04:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:05:51.645 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:05:51.645 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:51.645 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:51.645 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:51.645 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:05:51.645 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:05:51.645 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:05:51.645 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:05:51.645 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:05:51.645 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:05:51.645 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:05:51.645 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:05:51.645 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:05:51.645 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:05:51.645 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:05:51.645 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:51.645 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:51.645 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:51.903 nvme0n1 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:05:51.903 06:04:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: ]] 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:05:51.903 06:04:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:51.903 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:52.161 nvme0n1 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: ]] 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:52.161 nvme0n1 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:52.161 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQzZTFhNDBmODIzMjFkNGQ5OTlmOGMwNDU1ZjkwYmIyZDBkNjA5YTFkNWI3Mzcwh9qfMg==: 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQzZTFhNDBmODIzMjFkNGQ5OTlmOGMwNDU1ZjkwYmIyZDBkNjA5YTFkNWI3Mzcwh9qfMg==: 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: ]] 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:52.419 nvme0n1 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWMzNjU1NjAwMzZiMDgxZGExYjI2MjdkYWM2YmUzODc1YWZlZTAyNTE3ZDI4MTk1OTJmZjkxMGNhYmUwNzc1YdwQQH4=: 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YWMzNjU1NjAwMzZiMDgxZGExYjI2MjdkYWM2YmUzODc1YWZlZTAyNTE3ZDI4MTk1OTJmZjkxMGNhYmUwNzc1YdwQQH4=: 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:52.419 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:52.420 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:05:52.420 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:05:52.420 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:05:52.420 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:05:52.420 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:05:52.420 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:05:52.420 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:05:52.420 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:05:52.420 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:05:52.420 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:05:52.420 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:05:52.420 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:05:52.420 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:52.420 06:04:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:52.677 nvme0n1 01:05:52.677 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:52.677 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:05:52.677 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:05:52.677 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:52.677 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:52.677 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:52.677 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:52.677 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:05:52.677 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:52.677 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:52.677 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:52.677 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:05:52.677 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:05:52.677 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 01:05:52.677 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:05:52.677 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:05:52.677 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:05:52.677 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:05:52.677 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVjY2ZmZDhiNTNlNjFjYWNmYmZhMTVmYmRhZTE4NzmqmNGr: 01:05:52.677 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: 01:05:52.677 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:05:52.677 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:05:53.243 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVjY2ZmZDhiNTNlNjFjYWNmYmZhMTVmYmRhZTE4NzmqmNGr: 01:05:53.243 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: ]] 01:05:53.243 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: 01:05:53.243 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 01:05:53.243 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:05:53.243 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:05:53.243 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:05:53.243 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:05:53.243 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:05:53.243 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:05:53.243 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:53.243 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:53.243 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:53.243 06:04:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:05:53.243 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:05:53.502 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:05:53.502 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:05:53.502 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:05:53.502 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:05:53.502 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:05:53.502 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:05:53.502 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:05:53.502 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:05:53.502 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:05:53.502 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:53.502 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:53.502 06:04:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:53.502 nvme0n1 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: ]] 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:53.502 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:53.770 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:53.770 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:05:53.770 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:05:53.770 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:05:53.770 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:05:53.770 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:53.771 06:04:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:53.771 nvme0n1 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: ]] 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:53.771 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:54.037 nvme0n1 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQzZTFhNDBmODIzMjFkNGQ5OTlmOGMwNDU1ZjkwYmIyZDBkNjA5YTFkNWI3Mzcwh9qfMg==: 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:05:54.037 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQzZTFhNDBmODIzMjFkNGQ5OTlmOGMwNDU1ZjkwYmIyZDBkNjA5YTFkNWI3Mzcwh9qfMg==: 01:05:54.038 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: ]] 01:05:54.038 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: 01:05:54.038 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 01:05:54.038 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:05:54.038 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:05:54.038 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:05:54.038 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:05:54.038 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:05:54.038 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:05:54.038 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:54.038 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:54.038 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:54.038 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:05:54.038 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:05:54.296 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:05:54.296 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:05:54.296 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:05:54.296 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:05:54.296 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:05:54.296 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:05:54.296 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:05:54.296 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:05:54.296 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:05:54.296 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:05:54.296 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:54.296 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:54.296 nvme0n1 01:05:54.296 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:54.296 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:05:54.296 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:05:54.296 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:54.296 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:54.296 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:54.296 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:54.296 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:05:54.296 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:54.296 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWMzNjU1NjAwMzZiMDgxZGExYjI2MjdkYWM2YmUzODc1YWZlZTAyNTE3ZDI4MTk1OTJmZjkxMGNhYmUwNzc1YdwQQH4=: 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWMzNjU1NjAwMzZiMDgxZGExYjI2MjdkYWM2YmUzODc1YWZlZTAyNTE3ZDI4MTk1OTJmZjkxMGNhYmUwNzc1YdwQQH4=: 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:05:54.554 06:04:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:54.554 06:04:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:54.554 nvme0n1 01:05:54.554 06:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:54.554 06:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:05:54.554 06:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:05:54.554 06:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:54.554 06:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:54.554 06:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:54.813 06:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:54.813 06:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:05:54.813 06:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:54.813 06:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:54.813 06:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:54.813 06:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:05:54.813 06:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:05:54.813 06:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 01:05:54.813 06:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:05:54.813 06:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:05:54.813 06:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:05:54.813 06:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:05:54.813 06:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVjY2ZmZDhiNTNlNjFjYWNmYmZhMTVmYmRhZTE4NzmqmNGr: 01:05:54.813 06:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: 01:05:54.813 06:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:05:54.813 06:04:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:05:56.712 06:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVjY2ZmZDhiNTNlNjFjYWNmYmZhMTVmYmRhZTE4NzmqmNGr: 01:05:56.712 06:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: ]] 01:05:56.712 06:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: 01:05:56.712 06:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 01:05:56.712 06:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:05:56.712 06:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:05:56.712 06:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:05:56.712 06:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:05:56.712 06:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:05:56.712 06:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:05:56.712 06:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:56.712 06:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:56.712 06:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:56.712 06:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:05:56.712 06:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:05:56.712 06:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:05:56.712 06:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:05:56.712 06:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:05:56.712 06:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:05:56.712 06:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:05:56.712 06:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:05:56.712 06:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:05:56.712 06:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:05:56.712 06:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:05:56.712 06:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:56.712 06:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:56.712 06:04:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:56.712 nvme0n1 01:05:56.712 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:56.712 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:05:56.712 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:05:56.712 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:56.712 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:56.712 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: ]] 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:56.970 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:57.228 nvme0n1 01:05:57.228 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:57.228 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:05:57.228 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:57.228 06:04:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:57.228 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:05:57.228 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:57.228 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:57.228 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: ]] 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:57.229 06:04:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:57.229 06:04:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:57.794 nvme0n1 01:05:57.794 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:57.794 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:05:57.794 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:57.794 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:05:57.794 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:57.794 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:57.794 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:57.794 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:05:57.794 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:57.794 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:57.794 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:57.794 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:05:57.794 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 01:05:57.794 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:05:57.794 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:05:57.794 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:05:57.794 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:05:57.794 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NGQzZTFhNDBmODIzMjFkNGQ5OTlmOGMwNDU1ZjkwYmIyZDBkNjA5YTFkNWI3Mzcwh9qfMg==: 01:05:57.794 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: 01:05:57.794 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:05:57.794 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:05:57.794 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQzZTFhNDBmODIzMjFkNGQ5OTlmOGMwNDU1ZjkwYmIyZDBkNjA5YTFkNWI3Mzcwh9qfMg==: 01:05:57.794 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: ]] 01:05:57.794 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: 01:05:57.794 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 01:05:57.794 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:05:57.794 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:05:57.794 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:05:57.794 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:05:57.794 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:05:57.795 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:05:57.795 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:57.795 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:57.795 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:57.795 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:05:57.795 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:05:57.795 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:05:57.795 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:05:57.795 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:05:57.795 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:05:57.795 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:05:57.795 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:05:57.795 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:05:57.795 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:05:57.795 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:05:57.795 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:05:57.795 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:57.795 
06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:58.052 nvme0n1 01:05:58.052 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:58.052 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:05:58.052 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:05:58.052 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:58.052 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:58.052 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:58.309 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:58.309 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:05:58.309 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:58.309 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:58.309 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:58.309 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:05:58.309 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 01:05:58.309 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:05:58.309 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:05:58.309 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:05:58.309 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:05:58.309 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWMzNjU1NjAwMzZiMDgxZGExYjI2MjdkYWM2YmUzODc1YWZlZTAyNTE3ZDI4MTk1OTJmZjkxMGNhYmUwNzc1YdwQQH4=: 01:05:58.309 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:05:58.309 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:05:58.309 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:05:58.309 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWMzNjU1NjAwMzZiMDgxZGExYjI2MjdkYWM2YmUzODc1YWZlZTAyNTE3ZDI4MTk1OTJmZjkxMGNhYmUwNzc1YdwQQH4=: 01:05:58.309 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:05:58.309 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 01:05:58.309 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:05:58.309 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:05:58.309 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:05:58.309 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:05:58.310 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:05:58.310 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 01:05:58.310 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:05:58.310 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:58.310 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:58.310 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:05:58.310 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:05:58.310 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:05:58.310 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:05:58.310 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:05:58.310 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:05:58.310 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:05:58.310 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:05:58.310 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:05:58.310 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:05:58.310 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:05:58.310 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:05:58.310 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:58.310 06:04:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:58.586 nvme0n1 01:05:58.586 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:58.586 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:05:58.586 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:05:58.586 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:58.586 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:58.586 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:58.586 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:58.586 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:05:58.586 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:58.586 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:58.586 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:58.586 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:05:58.586 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:05:58.586 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 01:05:58.586 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:05:58.586 06:04:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:05:58.586 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:05:58.586 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:05:58.586 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVjY2ZmZDhiNTNlNjFjYWNmYmZhMTVmYmRhZTE4NzmqmNGr: 01:05:58.586 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: 01:05:58.586 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:05:58.586 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:05:58.586 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVjY2ZmZDhiNTNlNjFjYWNmYmZhMTVmYmRhZTE4NzmqmNGr: 01:05:58.587 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: ]] 01:05:58.587 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: 01:05:58.587 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 01:05:58.587 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:05:58.587 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:05:58.587 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:05:58.587 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:05:58.587 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:05:58.587 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:05:58.587 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:58.587 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:58.587 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:58.587 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:05:58.587 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:05:58.587 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:05:58.587 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:05:58.587 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:05:58.587 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:05:58.587 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:05:58.587 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:05:58.587 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:05:58.587 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:05:58.587 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 01:05:58.587 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:05:58.587 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:58.587 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:59.521 nvme0n1 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: ]] 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:05:59.521 06:04:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:00.089 nvme0n1 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: ]] 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:00.089 
06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:00.089 06:04:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:01.038 nvme0n1 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQzZTFhNDBmODIzMjFkNGQ5OTlmOGMwNDU1ZjkwYmIyZDBkNjA5YTFkNWI3Mzcwh9qfMg==: 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQzZTFhNDBmODIzMjFkNGQ5OTlmOGMwNDU1ZjkwYmIyZDBkNjA5YTFkNWI3Mzcwh9qfMg==: 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: ]] 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:01.038 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:01.624 nvme0n1 01:06:01.624 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:01.624 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:01.624 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:01.624 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:01.624 06:04:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:01.624 06:04:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:01.624 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:01.624 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:01.624 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:01.624 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:01.624 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:01.624 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:01.624 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 01:06:01.624 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:01.624 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:01.624 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:06:01.624 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:06:01.624 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWMzNjU1NjAwMzZiMDgxZGExYjI2MjdkYWM2YmUzODc1YWZlZTAyNTE3ZDI4MTk1OTJmZjkxMGNhYmUwNzc1YdwQQH4=: 01:06:01.624 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:06:01.624 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:01.624 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:06:01.624 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWMzNjU1NjAwMzZiMDgxZGExYjI2MjdkYWM2YmUzODc1YWZlZTAyNTE3ZDI4MTk1OTJmZjkxMGNhYmUwNzc1YdwQQH4=: 01:06:01.625 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:06:01.625 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 01:06:01.625 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:01.625 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 01:06:01.625 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:06:01.625 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:06:01.625 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:01.625 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 01:06:01.625 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:01.625 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:01.625 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:01.625 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:01.625 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:01.625 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:01.625 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:01.625 06:04:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:01.625 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:01.625 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:01.625 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:01.625 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:01.625 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:01.625 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:01.625 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:06:01.625 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:01.625 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:02.191 nvme0n1 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVjY2ZmZDhiNTNlNjFjYWNmYmZhMTVmYmRhZTE4NzmqmNGr: 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVjY2ZmZDhiNTNlNjFjYWNmYmZhMTVmYmRhZTE4NzmqmNGr: 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: ]] 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:02.191 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 01:06:02.449 nvme0n1 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: ]] 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:02.449 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:02.450 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:02.450 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:02.450 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:02.450 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:02.450 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:02.450 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:02.450 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:02.450 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:02.450 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:02.450 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:02.450 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:02.450 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:02.450 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:02.450 06:04:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:02.450 nvme0n1 01:06:02.450 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:02.450 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:02.450 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:02.450 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:02.450 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:02.450 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:02.707 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:02.707 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:02.707 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:02.707 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:02.707 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:02.707 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:02.707 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 01:06:02.707 
06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:02.707 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:02.707 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:06:02.707 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:06:02.707 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:06:02.707 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:06:02.707 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: ]] 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:02.708 nvme0n1 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQzZTFhNDBmODIzMjFkNGQ5OTlmOGMwNDU1ZjkwYmIyZDBkNjA5YTFkNWI3Mzcwh9qfMg==: 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQzZTFhNDBmODIzMjFkNGQ5OTlmOGMwNDU1ZjkwYmIyZDBkNjA5YTFkNWI3Mzcwh9qfMg==: 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: ]] 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:02.708 
06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:02.708 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:02.968 nvme0n1 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWMzNjU1NjAwMzZiMDgxZGExYjI2MjdkYWM2YmUzODc1YWZlZTAyNTE3ZDI4MTk1OTJmZjkxMGNhYmUwNzc1YdwQQH4=: 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWMzNjU1NjAwMzZiMDgxZGExYjI2MjdkYWM2YmUzODc1YWZlZTAyNTE3ZDI4MTk1OTJmZjkxMGNhYmUwNzc1YdwQQH4=: 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:02.968 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:03.227 nvme0n1 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVjY2ZmZDhiNTNlNjFjYWNmYmZhMTVmYmRhZTE4NzmqmNGr: 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVjY2ZmZDhiNTNlNjFjYWNmYmZhMTVmYmRhZTE4NzmqmNGr: 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: ]] 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:03.227 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:03.228 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:03.228 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:03.228 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:03.228 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:03.228 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:03.228 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:03.228 nvme0n1 01:06:03.228 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:03.228 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:03.228 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:03.228 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:03.228 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:03.228 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:03.486 
06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: ]] 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:03.486 06:04:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:03.486 nvme0n1 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:03.486 06:04:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:06:03.486 06:04:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: ]] 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:03.486 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:03.745 nvme0n1 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQzZTFhNDBmODIzMjFkNGQ5OTlmOGMwNDU1ZjkwYmIyZDBkNjA5YTFkNWI3Mzcwh9qfMg==: 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQzZTFhNDBmODIzMjFkNGQ5OTlmOGMwNDU1ZjkwYmIyZDBkNjA5YTFkNWI3Mzcwh9qfMg==: 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: ]] 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:03.745 06:04:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:03.745 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:04.003 nvme0n1 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:06:04.003 
06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWMzNjU1NjAwMzZiMDgxZGExYjI2MjdkYWM2YmUzODc1YWZlZTAyNTE3ZDI4MTk1OTJmZjkxMGNhYmUwNzc1YdwQQH4=: 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWMzNjU1NjAwMzZiMDgxZGExYjI2MjdkYWM2YmUzODc1YWZlZTAyNTE3ZDI4MTk1OTJmZjkxMGNhYmUwNzc1YdwQQH4=: 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:04.003 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
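For reference, each connect_authenticate pass traced in this log reduces to the same short RPC sequence: constrain the initiator to one digest/DH-group pair, attach with the DH-HMAC-CHAP key for the current keyid (bidirectionally when a controller key exists), confirm the controller appears, then detach before the next keyid. The sketch below is a minimal, hedged reconstruction of that cycle, assuming SPDK's standard scripts/rpc.py client rather than the test's rpc_cmd wrapper; DIGEST/DHGROUP are placeholders, key1/ckey1 stand for key names set up earlier in the run, and it presumes nvmet_auth_set_key has already written the matching digest, DH group, and key on the kernel target side.

# Minimal sketch of one connect_authenticate iteration (assumed scripts/rpc.py client;
# key1/ckey1 are the names of DH-HMAC-CHAP keys registered earlier in the run).
DIGEST=sha384
DHGROUP=ffdhe3072
# Restrict the initiator to a single digest / DH group combination.
scripts/rpc.py bdev_nvme_set_options --dhchap-digests "$DIGEST" --dhchap-dhgroups "$DHGROUP"
# Attach to the target at 10.0.0.1:4420, authenticating with the host key and,
# when a controller key exists, bidirectionally via --dhchap-ctrlr-key.
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# Verify the authenticated controller came up, then detach before the next keyid.
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expected output: nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0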
01:06:04.261 nvme0n1 01:06:04.261 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:04.261 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:04.261 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:04.261 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:04.261 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:04.261 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:04.261 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVjY2ZmZDhiNTNlNjFjYWNmYmZhMTVmYmRhZTE4NzmqmNGr: 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVjY2ZmZDhiNTNlNjFjYWNmYmZhMTVmYmRhZTE4NzmqmNGr: 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: ]] 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:06:04.262 06:04:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:04.262 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:04.519 nvme0n1 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:04.519 06:04:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: ]] 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:04.519 06:04:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:04.519 06:04:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:04.778 nvme0n1 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: ]] 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:04.778 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:04.779 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:04.779 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:04.779 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:04.779 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:04.779 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:04.779 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:05.038 nvme0n1 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQzZTFhNDBmODIzMjFkNGQ5OTlmOGMwNDU1ZjkwYmIyZDBkNjA5YTFkNWI3Mzcwh9qfMg==: 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQzZTFhNDBmODIzMjFkNGQ5OTlmOGMwNDU1ZjkwYmIyZDBkNjA5YTFkNWI3Mzcwh9qfMg==: 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: ]] 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:05.038 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:05.297 nvme0n1 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWMzNjU1NjAwMzZiMDgxZGExYjI2MjdkYWM2YmUzODc1YWZlZTAyNTE3ZDI4MTk1OTJmZjkxMGNhYmUwNzc1YdwQQH4=: 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YWMzNjU1NjAwMzZiMDgxZGExYjI2MjdkYWM2YmUzODc1YWZlZTAyNTE3ZDI4MTk1OTJmZjkxMGNhYmUwNzc1YdwQQH4=: 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:05.297 06:04:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:05.555 nvme0n1 01:06:05.555 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:05.555 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:05.555 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:05.555 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:05.555 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:05.555 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:05.555 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:05.555 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:05.555 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVjY2ZmZDhiNTNlNjFjYWNmYmZhMTVmYmRhZTE4NzmqmNGr: 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVjY2ZmZDhiNTNlNjFjYWNmYmZhMTVmYmRhZTE4NzmqmNGr: 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: ]] 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:05.556 06:05:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:05.556 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:06.123 nvme0n1 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: ]] 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:06.123 06:05:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:06.123 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:06.382 nvme0n1 01:06:06.382 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:06.382 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:06.382 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:06.382 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:06.382 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:06.382 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:06.382 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:06.382 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:06.382 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:06.382 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:06.382 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:06.382 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:06.382 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 01:06:06.382 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:06.382 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:06.382 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:06:06.382 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:06:06.382 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:06:06.382 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:06:06.382 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:06.382 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:06:06.383 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:06:06.383 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: ]] 01:06:06.383 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:06:06.383 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 01:06:06.383 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:06.383 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:06.383 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:06:06.383 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:06:06.383 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:06.383 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:06:06.383 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:06.383 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:06.641 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:06.641 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:06.641 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:06.641 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:06.641 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:06.641 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:06.641 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:06.641 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:06.641 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:06.641 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:06.641 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:06.641 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:06.641 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:06.642 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:06.642 06:05:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:06.900 nvme0n1 01:06:06.900 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:06.900 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:06.900 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:06.900 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:06.900 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQzZTFhNDBmODIzMjFkNGQ5OTlmOGMwNDU1ZjkwYmIyZDBkNjA5YTFkNWI3Mzcwh9qfMg==: 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQzZTFhNDBmODIzMjFkNGQ5OTlmOGMwNDU1ZjkwYmIyZDBkNjA5YTFkNWI3Mzcwh9qfMg==: 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: ]] 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:06.901 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:07.470 nvme0n1 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWMzNjU1NjAwMzZiMDgxZGExYjI2MjdkYWM2YmUzODc1YWZlZTAyNTE3ZDI4MTk1OTJmZjkxMGNhYmUwNzc1YdwQQH4=: 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWMzNjU1NjAwMzZiMDgxZGExYjI2MjdkYWM2YmUzODc1YWZlZTAyNTE3ZDI4MTk1OTJmZjkxMGNhYmUwNzc1YdwQQH4=: 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:07.470 06:05:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:07.470 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:07.471 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:07.471 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:07.471 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:07.471 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:07.471 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:06:07.471 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:07.471 06:05:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:07.730 nvme0n1 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVjY2ZmZDhiNTNlNjFjYWNmYmZhMTVmYmRhZTE4NzmqmNGr: 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVjY2ZmZDhiNTNlNjFjYWNmYmZhMTVmYmRhZTE4NzmqmNGr: 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: ]] 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:07.730 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:08.667 nvme0n1 01:06:08.667 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:08.667 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:08.667 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: ]] 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:08.668 06:05:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:09.235 nvme0n1 01:06:09.235 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:09.235 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:09.235 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:09.235 06:05:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:09.235 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:09.235 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: ]] 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:09.236 06:05:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:09.236 06:05:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:09.812 nvme0n1 01:06:09.812 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:09.812 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:09.812 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:09.812 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:09.812 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:09.812 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:09.812 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:09.812 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:09.812 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:09.812 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NGQzZTFhNDBmODIzMjFkNGQ5OTlmOGMwNDU1ZjkwYmIyZDBkNjA5YTFkNWI3Mzcwh9qfMg==: 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQzZTFhNDBmODIzMjFkNGQ5OTlmOGMwNDU1ZjkwYmIyZDBkNjA5YTFkNWI3Mzcwh9qfMg==: 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: ]] 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:06:10.096 06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:10.096 
06:05:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:10.665 nvme0n1 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWMzNjU1NjAwMzZiMDgxZGExYjI2MjdkYWM2YmUzODc1YWZlZTAyNTE3ZDI4MTk1OTJmZjkxMGNhYmUwNzc1YdwQQH4=: 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWMzNjU1NjAwMzZiMDgxZGExYjI2MjdkYWM2YmUzODc1YWZlZTAyNTE3ZDI4MTk1OTJmZjkxMGNhYmUwNzc1YdwQQH4=: 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:10.665 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.236 nvme0n1 01:06:11.236 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:11.236 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:11.236 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:11.236 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:11.236 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.236 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:11.236 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:11.236 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:11.236 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:11.236 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.236 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:11.237 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 01:06:11.237 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:06:11.237 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:11.237 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 01:06:11.237 06:05:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:11.237 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:11.237 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:06:11.237 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:06:11.237 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVjY2ZmZDhiNTNlNjFjYWNmYmZhMTVmYmRhZTE4NzmqmNGr: 01:06:11.237 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: 01:06:11.237 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:11.237 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:06:11.237 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVjY2ZmZDhiNTNlNjFjYWNmYmZhMTVmYmRhZTE4NzmqmNGr: 01:06:11.237 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: ]] 01:06:11.237 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: 01:06:11.237 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 01:06:11.237 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:11.237 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:11.237 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:06:11.237 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:06:11.237 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:11.237 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:06:11.237 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:11.237 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.495 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:11.495 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:11.495 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:11.495 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:11.495 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:11.495 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:11.495 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:11.495 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:11.495 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:11.495 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:11.495 06:05:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:11.495 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:11.495 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:11.495 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:11.495 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.495 nvme0n1 01:06:11.495 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:11.495 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:11.495 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:11.495 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:11.495 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.495 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:11.495 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:11.495 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:11.495 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:11.495 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.495 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:11.495 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:11.496 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 01:06:11.496 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:11.496 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:11.496 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:06:11.496 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:06:11.496 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:06:11.496 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:06:11.496 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:11.496 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:06:11.496 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:06:11.496 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: ]] 01:06:11.496 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:06:11.496 06:05:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 01:06:11.496 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:11.496 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:11.496 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:06:11.496 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:06:11.496 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:11.496 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:06:11.496 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:11.496 06:05:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.496 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:11.496 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:11.496 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:11.496 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:11.496 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:11.496 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:11.496 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:11.496 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:11.496 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:11.496 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:11.496 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:11.496 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:11.496 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:11.496 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:11.496 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.754 nvme0n1 01:06:11.754 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:11.754 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:11.754 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:11.754 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:11.754 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.754 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:11.754 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:11.754 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:11.754 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:11.754 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.754 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:11.754 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:11.754 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 01:06:11.754 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:11.754 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:11.754 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:06:11.754 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:06:11.754 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:06:11.754 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:06:11.754 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:11.754 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:06:11.754 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:06:11.754 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: ]] 01:06:11.755 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:06:11.755 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 01:06:11.755 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:11.755 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:11.755 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:06:11.755 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:06:11.755 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:11.755 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:06:11.755 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:11.755 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.755 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:11.755 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:11.755 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:11.755 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:11.755 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:11.755 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:11.755 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:11.755 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:11.755 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:11.755 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:11.755 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:11.755 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:11.755 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:11.755 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:11.755 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.755 nvme0n1 01:06:11.755 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:11.755 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:11.755 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:11.755 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:11.755 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:11.755 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQzZTFhNDBmODIzMjFkNGQ5OTlmOGMwNDU1ZjkwYmIyZDBkNjA5YTFkNWI3Mzcwh9qfMg==: 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:NGQzZTFhNDBmODIzMjFkNGQ5OTlmOGMwNDU1ZjkwYmIyZDBkNjA5YTFkNWI3Mzcwh9qfMg==: 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: ]] 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:12.013 nvme0n1 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWMzNjU1NjAwMzZiMDgxZGExYjI2MjdkYWM2YmUzODc1YWZlZTAyNTE3ZDI4MTk1OTJmZjkxMGNhYmUwNzc1YdwQQH4=: 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWMzNjU1NjAwMzZiMDgxZGExYjI2MjdkYWM2YmUzODc1YWZlZTAyNTE3ZDI4MTk1OTJmZjkxMGNhYmUwNzc1YdwQQH4=: 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:12.013 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:12.272 nvme0n1 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVjY2ZmZDhiNTNlNjFjYWNmYmZhMTVmYmRhZTE4NzmqmNGr: 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVjY2ZmZDhiNTNlNjFjYWNmYmZhMTVmYmRhZTE4NzmqmNGr: 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: ]] 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:12.272 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 01:06:12.531 nvme0n1 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: ]] 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:12.531 06:05:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:12.531 nvme0n1 01:06:12.531 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:12.531 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:12.531 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:12.531 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:12.531 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:12.531 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 01:06:12.790 
06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: ]] 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:12.790 nvme0n1 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:12.790 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQzZTFhNDBmODIzMjFkNGQ5OTlmOGMwNDU1ZjkwYmIyZDBkNjA5YTFkNWI3Mzcwh9qfMg==: 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQzZTFhNDBmODIzMjFkNGQ5OTlmOGMwNDU1ZjkwYmIyZDBkNjA5YTFkNWI3Mzcwh9qfMg==: 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: ]] 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:13.048 
06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:13.048 nvme0n1 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 01:06:13.048 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWMzNjU1NjAwMzZiMDgxZGExYjI2MjdkYWM2YmUzODc1YWZlZTAyNTE3ZDI4MTk1OTJmZjkxMGNhYmUwNzc1YdwQQH4=: 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWMzNjU1NjAwMzZiMDgxZGExYjI2MjdkYWM2YmUzODc1YWZlZTAyNTE3ZDI4MTk1OTJmZjkxMGNhYmUwNzc1YdwQQH4=: 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:13.049 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:13.307 nvme0n1 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVjY2ZmZDhiNTNlNjFjYWNmYmZhMTVmYmRhZTE4NzmqmNGr: 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVjY2ZmZDhiNTNlNjFjYWNmYmZhMTVmYmRhZTE4NzmqmNGr: 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: ]] 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:13.307 06:05:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:13.566 nvme0n1 01:06:13.566 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:13.566 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:13.566 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:13.566 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:13.566 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:13.566 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:13.566 
06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:13.566 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:13.566 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:13.566 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:13.566 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:13.566 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:13.566 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 01:06:13.566 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:13.566 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:13.566 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:06:13.566 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:06:13.566 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:06:13.566 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:06:13.566 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:13.566 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:06:13.566 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:06:13.567 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: ]] 01:06:13.567 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:06:13.567 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 01:06:13.567 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:13.567 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:13.567 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:06:13.567 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:06:13.567 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:13.567 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:06:13.567 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:13.567 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:13.567 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:13.567 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:13.567 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:13.567 06:05:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:13.567 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:13.567 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:13.567 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:13.567 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:13.567 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:13.567 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:13.567 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:13.567 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:13.567 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:13.567 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:13.567 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:13.826 nvme0n1 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:06:13.826 06:05:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: ]] 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:13.826 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:14.085 nvme0n1 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQzZTFhNDBmODIzMjFkNGQ5OTlmOGMwNDU1ZjkwYmIyZDBkNjA5YTFkNWI3Mzcwh9qfMg==: 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQzZTFhNDBmODIzMjFkNGQ5OTlmOGMwNDU1ZjkwYmIyZDBkNjA5YTFkNWI3Mzcwh9qfMg==: 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: ]] 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:06:14.085 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:14.085 06:05:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:14.344 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:14.344 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:14.344 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:14.344 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:14.344 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:14.344 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:14.344 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:14.344 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:14.344 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:14.344 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:14.344 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:14.344 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:14.344 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:06:14.344 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:14.344 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:14.344 nvme0n1 01:06:14.344 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:14.344 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:14.344 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:14.344 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:14.344 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:14.344 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 01:06:14.603 
06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWMzNjU1NjAwMzZiMDgxZGExYjI2MjdkYWM2YmUzODc1YWZlZTAyNTE3ZDI4MTk1OTJmZjkxMGNhYmUwNzc1YdwQQH4=: 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWMzNjU1NjAwMzZiMDgxZGExYjI2MjdkYWM2YmUzODc1YWZlZTAyNTE3ZDI4MTk1OTJmZjkxMGNhYmUwNzc1YdwQQH4=: 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:14.603 06:05:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
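The trace above repeats one cycle per DH group and key index: nvmet_auth_set_key programs the target side with the hmac(sha512) hash, the DH group and the DHHC-1 secret, then connect_authenticate configures the host with bdev_nvme_set_options, attaches a controller with the matching --dhchap-key/--dhchap-ctrlr-key pair, verifies the controller name and detaches it. A minimal sketch of that loop follows, assuming the keys[]/ckeys[] arrays already hold the DHHC-1 secrets and that key${keyid}/ckey${keyid} are key names registered with the SPDK target earlier in the test; the configfs paths are assumptions, since the trace shows the echo commands from nvmet_auth_set_key but not their redirection targets.

# Sketch of the per-dhgroup/per-keyid cycle seen in this part of the trace.
# host_dir is an assumed kernel-nvmet configfs location; rpc_cmd is the
# SPDK rpc wrapper used throughout the trace.
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path

nvmet_auth_set_key() {            # mirrors host/auth.sh@42..51
	local digest=$1 dhgroup=$2 keyid=$3
	local key=${keys[keyid]} ckey=${ckeys[keyid]}
	echo "hmac($digest)" > "$host_dir/dhchap_hash"       # assumed attribute
	echo "$dhgroup"      > "$host_dir/dhchap_dhgroup"    # assumed attribute
	echo "$key"          > "$host_dir/dhchap_key"        # assumed attribute
	[[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"   # assumed attribute
}

connect_authenticate() {          # mirrors host/auth.sh@55..65
	local digest=$1 dhgroup=$2 keyid=$3
	local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key${keyid}" "${ckey[@]}"
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
	rpc_cmd bdev_nvme_detach_controller nvme0
}

# sha512 is the only digest in this part of the trace; these are the DH groups it walks.
for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
	for keyid in "${!keys[@]}"; do
		nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
		connect_authenticate sha512 "$dhgroup" "$keyid"
	done
done

Each successful attach surfaces the subsystem's namespace as bdev nvme0n1, which is what the bare nvme0n1 lines in the trace report just before the bdev_nvme_get_controllers/bdev_nvme_detach_controller calls.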
01:06:14.603 nvme0n1 01:06:14.603 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:14.603 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:14.603 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:14.603 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:14.603 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:14.603 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVjY2ZmZDhiNTNlNjFjYWNmYmZhMTVmYmRhZTE4NzmqmNGr: 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVjY2ZmZDhiNTNlNjFjYWNmYmZhMTVmYmRhZTE4NzmqmNGr: 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: ]] 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:06:14.863 06:05:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:14.863 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:15.122 nvme0n1 01:06:15.122 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:15.122 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:15.122 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:15.122 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:15.122 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:15.122 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:15.122 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:15.122 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:15.122 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:15.122 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:15.122 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:15.122 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:15.122 06:05:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 01:06:15.123 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:15.123 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:15.123 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:06:15.123 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:06:15.123 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:06:15.123 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:06:15.123 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:15.123 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:06:15.123 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:06:15.123 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: ]] 01:06:15.123 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:06:15.123 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 01:06:15.123 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:15.123 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:15.123 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:06:15.123 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:06:15.123 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:15.123 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:06:15.123 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:15.123 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:15.123 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:15.123 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:15.123 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:15.123 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:15.123 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:15.123 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:15.123 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:15.123 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:15.123 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:15.382 06:05:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:15.382 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:15.382 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:15.382 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:15.382 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:15.382 06:05:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:15.640 nvme0n1 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: ]] 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:15.640 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:16.205 nvme0n1 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQzZTFhNDBmODIzMjFkNGQ5OTlmOGMwNDU1ZjkwYmIyZDBkNjA5YTFkNWI3Mzcwh9qfMg==: 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQzZTFhNDBmODIzMjFkNGQ5OTlmOGMwNDU1ZjkwYmIyZDBkNjA5YTFkNWI3Mzcwh9qfMg==: 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: ]] 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:16.205 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:16.206 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:16.206 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:16.206 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:16.206 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:16.206 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:16.206 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:16.206 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:16.206 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:16.206 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:16.206 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:16.206 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:06:16.206 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:16.206 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:16.464 nvme0n1 01:06:16.464 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:16.464 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:16.464 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:16.464 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:16.464 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:16.464 06:05:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:16.464 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:16.464 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:16.464 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:16.464 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWMzNjU1NjAwMzZiMDgxZGExYjI2MjdkYWM2YmUzODc1YWZlZTAyNTE3ZDI4MTk1OTJmZjkxMGNhYmUwNzc1YdwQQH4=: 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YWMzNjU1NjAwMzZiMDgxZGExYjI2MjdkYWM2YmUzODc1YWZlZTAyNTE3ZDI4MTk1OTJmZjkxMGNhYmUwNzc1YdwQQH4=: 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:16.722 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:16.981 nvme0n1 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjVjY2ZmZDhiNTNlNjFjYWNmYmZhMTVmYmRhZTE4NzmqmNGr: 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjVjY2ZmZDhiNTNlNjFjYWNmYmZhMTVmYmRhZTE4NzmqmNGr: 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: ]] 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDg4NWVjYWU2MGVjMDAzNTNjZmQ5YWFiYTUwZjdkNmE5MjY0ZTVkNzk2NDVhMjBmYjYxMWE5M2M2MDNhNjQ3ZVUwDxE=: 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:16.981 06:05:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:16.981 06:05:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:17.554 nvme0n1 01:06:17.554 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:17.554 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:17.554 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:17.554 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:17.554 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:17.812 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:17.812 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:17.812 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:17.812 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:17.812 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:17.812 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:17.812 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:17.812 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 01:06:17.812 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: ]] 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 01:06:17.813 06:05:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:17.813 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:18.390 nvme0n1 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: ]] 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:18.390 06:05:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:19.334 nvme0n1 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGQzZTFhNDBmODIzMjFkNGQ5OTlmOGMwNDU1ZjkwYmIyZDBkNjA5YTFkNWI3Mzcwh9qfMg==: 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGQzZTFhNDBmODIzMjFkNGQ5OTlmOGMwNDU1ZjkwYmIyZDBkNjA5YTFkNWI3Mzcwh9qfMg==: 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: ]] 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzc0N2NhOGEyNzYxM2JiMjI1NThkMjM5NzM3OWY0YjjFNbk7: 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:19.334 06:05:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:19.902 nvme0n1 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWMzNjU1NjAwMzZiMDgxZGExYjI2MjdkYWM2YmUzODc1YWZlZTAyNTE3ZDI4MTk1OTJmZjkxMGNhYmUwNzc1YdwQQH4=: 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWMzNjU1NjAwMzZiMDgxZGExYjI2MjdkYWM2YmUzODc1YWZlZTAyNTE3ZDI4MTk1OTJmZjkxMGNhYmUwNzc1YdwQQH4=: 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 01:06:19.902 06:05:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:19.902 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:20.472 nvme0n1 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: ]] 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:20.472 06:05:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:20.472 2024/12/09 06:05:15 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:06:20.472 request: 01:06:20.472 { 01:06:20.472 "method": "bdev_nvme_attach_controller", 01:06:20.472 "params": { 01:06:20.472 "name": "nvme0", 01:06:20.472 "trtype": "tcp", 01:06:20.472 "traddr": "10.0.0.1", 01:06:20.472 "adrfam": "ipv4", 01:06:20.472 "trsvcid": "4420", 01:06:20.472 "subnqn": "nqn.2024-02.io.spdk:cnode0", 01:06:20.472 "hostnqn": "nqn.2024-02.io.spdk:host0", 01:06:20.472 "prchk_reftag": false, 01:06:20.472 "prchk_guard": false, 01:06:20.472 "hdgst": false, 01:06:20.472 "ddgst": false, 01:06:20.472 "allow_unrecognized_csi": false 01:06:20.472 } 01:06:20.472 } 01:06:20.472 Got JSON-RPC error response 01:06:20.472 GoRPCClient: error on JSON-RPC call 01:06:20.472 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:06:20.472 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 01:06:20.472 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:06:20.472 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:06:20.472 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:06:20.472 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 01:06:20.472 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 01:06:20.472 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:20.472 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:20.472 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:20.732 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 01:06:20.732 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # 
get_main_ns_ip 01:06:20.732 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:20.732 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:20.732 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:20.732 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:20.732 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:20.732 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:20.732 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:20.732 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:20.732 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:20.732 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:20.732 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 01:06:20.732 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 01:06:20.732 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 01:06:20.732 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:06:20.732 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:06:20.732 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:06:20.732 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:06:20.732 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 01:06:20.732 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:20.732 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:20.732 2024/12/09 06:05:15 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:06:20.732 request: 01:06:20.732 { 01:06:20.732 "method": "bdev_nvme_attach_controller", 01:06:20.732 "params": { 01:06:20.732 "name": "nvme0", 01:06:20.732 "trtype": "tcp", 01:06:20.732 "traddr": "10.0.0.1", 01:06:20.732 "adrfam": "ipv4", 01:06:20.732 "trsvcid": "4420", 01:06:20.732 "subnqn": "nqn.2024-02.io.spdk:cnode0", 01:06:20.732 "hostnqn": "nqn.2024-02.io.spdk:host0", 01:06:20.732 "prchk_reftag": false, 01:06:20.732 "prchk_guard": false, 
01:06:20.732 "hdgst": false, 01:06:20.732 "ddgst": false, 01:06:20.732 "dhchap_key": "key2", 01:06:20.732 "allow_unrecognized_csi": false 01:06:20.732 } 01:06:20.732 } 01:06:20.732 Got JSON-RPC error response 01:06:20.732 GoRPCClient: error on JSON-RPC call 01:06:20.732 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:06:20.732 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 01:06:20.732 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t 
rpc_cmd 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:20.733 2024/12/09 06:05:15 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:06:20.733 request: 01:06:20.733 { 01:06:20.733 "method": "bdev_nvme_attach_controller", 01:06:20.733 "params": { 01:06:20.733 "name": "nvme0", 01:06:20.733 "trtype": "tcp", 01:06:20.733 "traddr": "10.0.0.1", 01:06:20.733 "adrfam": "ipv4", 01:06:20.733 "trsvcid": "4420", 01:06:20.733 "subnqn": "nqn.2024-02.io.spdk:cnode0", 01:06:20.733 "hostnqn": "nqn.2024-02.io.spdk:host0", 01:06:20.733 "prchk_reftag": false, 01:06:20.733 "prchk_guard": false, 01:06:20.733 "hdgst": false, 01:06:20.733 "ddgst": false, 01:06:20.733 "dhchap_key": "key1", 01:06:20.733 "dhchap_ctrlr_key": "ckey2", 01:06:20.733 "allow_unrecognized_csi": false 01:06:20.733 } 01:06:20.733 } 01:06:20.733 Got JSON-RPC error response 01:06:20.733 GoRPCClient: error on JSON-RPC call 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 
10.0.0.1 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:20.733 nvme0n1 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: ]] 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:20.733 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:20.991 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:20.991 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 01:06:20.991 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:20.991 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 01:06:20.991 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:20.991 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:20.991 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 01:06:20.992 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:06:20.992 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 01:06:20.992 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:06:20.992 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:06:20.992 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:06:20.992 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:06:20.992 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:06:20.992 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 01:06:20.992 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:20.992 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:20.992 2024/12/09 06:05:15 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey2 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 01:06:20.992 request: 01:06:20.992 { 01:06:20.992 "method": "bdev_nvme_set_keys", 01:06:20.992 "params": { 01:06:20.992 "name": "nvme0", 01:06:20.992 "dhchap_key": "key1", 01:06:20.992 "dhchap_ctrlr_key": "ckey2" 01:06:20.992 } 01:06:20.992 } 01:06:20.992 Got JSON-RPC error response 01:06:20.992 GoRPCClient: error on JSON-RPC call 01:06:20.992 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:06:20.992 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 01:06:20.992 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:06:20.992 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:06:20.992 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:06:20.992 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 01:06:20.992 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 01:06:20.992 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:20.992 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:20.992 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:20.992 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 01:06:20.992 06:05:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 01:06:21.927 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 01:06:21.927 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 01:06:21.927 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:21.927 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:21.927 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:22.187 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 01:06:22.187 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 01:06:22.187 06:05:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:22.187 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:06:22.187 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:06:22.187 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:06:22.187 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:06:22.187 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:06:22.187 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:22.187 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:06:22.187 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDUyZmFhMTdhZmM4YjliZTBhNzBjZTg5N2ZjYjkzMzYzNjQ2NGE2MTFlY2EyYjM11DRaPA==: 01:06:22.187 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: ]] 01:06:22.187 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjlhYzUwYzE5YzkwOGI5ZWYyNGI0MjUxMGEyYWE0MTQ0MzA5NTEyODkxZjk5N2I0iZ4JfQ==: 01:06:22.187 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:22.188 nvme0n1 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjljZWJlMjZmMDBiMzI0NTFkZDU4ZTBjNDRjNGZlNDDDHLQ0: 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: ]] 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmEzZDM0ZWU4MGFmZjU4ODE3NmZkZDcxMmRiMjdhZWVNHMR8: 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:22.188 2024/12/09 06:05:16 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey1 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 01:06:22.188 request: 01:06:22.188 { 01:06:22.188 "method": "bdev_nvme_set_keys", 01:06:22.188 "params": { 01:06:22.188 "name": "nvme0", 01:06:22.188 "dhchap_key": "key2", 01:06:22.188 "dhchap_ctrlr_key": "ckey1" 01:06:22.188 } 01:06:22.188 } 01:06:22.188 Got JSON-RPC error response 01:06:22.188 GoRPCClient: error on JSON-RPC call 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:06:22.188 06:05:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 01:06:22.188 06:05:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 01:06:23.566 06:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 01:06:23.566 06:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 01:06:23.566 06:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:23.566 06:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:23.566 06:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:23.566 06:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 01:06:23.566 06:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 01:06:23.566 06:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 01:06:23.566 06:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 01:06:23.566 06:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 01:06:23.566 06:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 01:06:23.566 06:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:06:23.566 06:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 01:06:23.566 06:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 01:06:23.566 06:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:06:23.566 rmmod nvme_tcp 01:06:23.566 rmmod nvme_fabrics 01:06:23.566 06:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:06:23.566 06:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 01:06:23.566 06:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 01:06:23.566 06:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 91529 ']' 01:06:23.566 06:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 91529 01:06:23.566 06:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 91529 ']' 01:06:23.566 06:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 91529 01:06:23.566 06:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 01:06:23.566 06:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:06:23.566 06:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91529 01:06:23.566 06:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:06:23.566 06:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:06:23.566 killing process 
with pid 91529 01:06:23.566 06:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91529' 01:06:23.566 06:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 91529 01:06:23.566 06:05:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 91529 01:06:23.566 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:06:23.566 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:06:23.566 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:06:23.566 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 01:06:23.566 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 01:06:23.566 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:06:23.566 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 01:06:23.566 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:06:23.566 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:06:23.566 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:06:23.566 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:06:23.566 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:06:23.566 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:06:23.566 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:06:23.566 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:06:23.566 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:06:23.566 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:06:23.566 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:06:23.566 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:06:23.825 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:06:23.825 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:06:23.825 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:06:23.825 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 01:06:23.825 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:06:23.825 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:06:23.825 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:06:23.825 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 01:06:23.825 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 01:06:23.825 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 01:06:23.825 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 01:06:23.825 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 01:06:23.825 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 01:06:23.825 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 01:06:23.825 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 01:06:23.825 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 01:06:23.826 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 01:06:23.826 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 01:06:23.826 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 01:06:23.826 06:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:06:24.392 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:06:24.651 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:06:24.651 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:06:24.651 06:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.PtS /tmp/spdk.key-null.jnc /tmp/spdk.key-sha256.Vj5 /tmp/spdk.key-sha384.893 /tmp/spdk.key-sha512.hix /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 01:06:24.651 06:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:06:25.218 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:06:25.218 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 01:06:25.218 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 01:06:25.218 01:06:25.218 real 0m38.182s 01:06:25.218 user 0m34.202s 01:06:25.218 sys 0m3.742s 01:06:25.218 06:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 01:06:25.218 ************************************ 01:06:25.218 END TEST nvmf_auth_host 01:06:25.218 06:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 01:06:25.218 ************************************ 01:06:25.218 06:05:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 01:06:25.218 06:05:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 01:06:25.218 06:05:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:06:25.218 06:05:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:06:25.218 06:05:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:06:25.218 ************************************ 01:06:25.218 START TEST nvmf_digest 01:06:25.218 
************************************ 01:06:25.218 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 01:06:25.218 * Looking for test storage... 01:06:25.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:06:25.218 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:06:25.218 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:06:25.218 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:06:25.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:06:25.477 --rc genhtml_branch_coverage=1 01:06:25.477 --rc genhtml_function_coverage=1 01:06:25.477 --rc genhtml_legend=1 01:06:25.477 --rc geninfo_all_blocks=1 01:06:25.477 --rc geninfo_unexecuted_blocks=1 01:06:25.477 01:06:25.477 ' 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:06:25.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:06:25.477 --rc genhtml_branch_coverage=1 01:06:25.477 --rc genhtml_function_coverage=1 01:06:25.477 --rc genhtml_legend=1 01:06:25.477 --rc geninfo_all_blocks=1 01:06:25.477 --rc geninfo_unexecuted_blocks=1 01:06:25.477 01:06:25.477 ' 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:06:25.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:06:25.477 --rc genhtml_branch_coverage=1 01:06:25.477 --rc genhtml_function_coverage=1 01:06:25.477 --rc genhtml_legend=1 01:06:25.477 --rc geninfo_all_blocks=1 01:06:25.477 --rc geninfo_unexecuted_blocks=1 01:06:25.477 01:06:25.477 ' 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:06:25.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:06:25.477 --rc genhtml_branch_coverage=1 01:06:25.477 --rc genhtml_function_coverage=1 01:06:25.477 --rc genhtml_legend=1 01:06:25.477 --rc geninfo_all_blocks=1 01:06:25.477 --rc geninfo_unexecuted_blocks=1 01:06:25.477 01:06:25.477 ' 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:06:25.477 06:05:19 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:06:25.477 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:06:25.478 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:06:25.478 Cannot find device "nvmf_init_br" 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:06:25.478 Cannot find device "nvmf_init_br2" 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:06:25.478 Cannot find device "nvmf_tgt_br" 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 01:06:25.478 Cannot find device "nvmf_tgt_br2" 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:06:25.478 Cannot find device "nvmf_init_br" 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:06:25.478 Cannot find device "nvmf_init_br2" 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:06:25.478 Cannot find device "nvmf_tgt_br" 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:06:25.478 Cannot find device "nvmf_tgt_br2" 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:06:25.478 Cannot find device "nvmf_br" 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:06:25.478 Cannot find device "nvmf_init_if" 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:06:25.478 Cannot find device "nvmf_init_if2" 01:06:25.478 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 01:06:25.479 06:05:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:06:25.479 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:06:25.479 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 01:06:25.479 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:06:25.479 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:06:25.479 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 01:06:25.479 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:06:25.479 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:06:25.479 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:06:25.479 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:06:25.479 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:06:25.737 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:06:25.737 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:06:25.737 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:06:25.737 06:05:20 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:06:25.737 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:06:25.737 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:06:25.737 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:06:25.737 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:06:25.737 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:06:25.737 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:06:25.737 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:06:25.737 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:06:25.737 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:06:25.737 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:06:25.737 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:06:25.737 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:06:25.737 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:06:25.737 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:06:25.737 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:06:25.737 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:06:25.737 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:06:25.737 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:06:25.737 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:06:25.737 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:06:25.737 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:06:25.738 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:06:25.738 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 01:06:25.738 01:06:25.738 --- 10.0.0.3 ping statistics --- 01:06:25.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:25.738 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:06:25.738 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:06:25.738 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 01:06:25.738 01:06:25.738 --- 10.0.0.4 ping statistics --- 01:06:25.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:25.738 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:06:25.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:06:25.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 01:06:25.738 01:06:25.738 --- 10.0.0.1 ping statistics --- 01:06:25.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:25.738 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:06:25.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:06:25.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 01:06:25.738 01:06:25.738 --- 10.0.0.2 ping statistics --- 01:06:25.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:25.738 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 01:06:25.738 ************************************ 01:06:25.738 START TEST nvmf_digest_clean 01:06:25.738 ************************************ 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
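For orientation, the veth topology that nvmf_veth_init built above, and that the four pings just verified, can be condensed as follows. This is only a recap sketch assembled from the commands already shown in this log; interface names, the 10.0.0.x addresses, and the nvmf_tgt_ns_spdk namespace are exactly as configured in this run.

  # Host-side initiator interfaces (default namespace):
  ip addr add 10.0.0.1/24 dev nvmf_init_if         # NVMF_FIRST_INITIATOR_IP
  ip addr add 10.0.0.2/24 dev nvmf_init_if2        # NVMF_SECOND_INITIATOR_IP
  # Target-side interfaces, moved into the target namespace:
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # first target IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2  # second target IP
  # All four veth peers are enslaved to a single bridge:
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br  master nvmf_br
  ip link set nvmf_init_br2 master nvmf_br
  ip link set nvmf_tgt_br   master nvmf_br
  ip link set nvmf_tgt_br2  master nvmf_br
  # Open NVMe/TCP port 4420 on the initiator side and allow bridge-local forwarding:
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # Connectivity check in both directions, as in the ping output above:
  ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1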
01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=93194 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 93194 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93194 ']' 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 01:06:25.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:06:25.738 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:06:25.996 [2024-12-09 06:05:20.371724] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:06:25.996 [2024-12-09 06:05:20.371834] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:06:25.996 [2024-12-09 06:05:20.526556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:25.996 [2024-12-09 06:05:20.564073] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:06:25.996 [2024-12-09 06:05:20.564141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:06:25.996 [2024-12-09 06:05:20.564156] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:06:25.996 [2024-12-09 06:05:20.564166] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:06:25.996 [2024-12-09 06:05:20.564175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
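nvmfappstart then launches the SPDK target inside that namespace and blocks until its RPC socket answers, which is what the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line above reflects. A minimal sketch of that sequence follows; the binary path and flags are the ones logged in this run, while the polling loop itself is illustrative rather than the harness code.

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # Wait for the UNIX-domain RPC socket before issuing any rpc.py calls against the target.
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done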
01:06:25.996 [2024-12-09 06:05:20.564535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:06:26.254 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:06:26.254 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 01:06:26.254 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:06:26.254 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 01:06:26.255 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:06:26.255 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:06:26.255 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 01:06:26.255 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 01:06:26.255 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 01:06:26.255 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:26.255 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:06:26.255 null0 01:06:26.255 [2024-12-09 06:05:20.750972] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:06:26.255 [2024-12-09 06:05:20.775123] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:06:26.255 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:26.255 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 01:06:26.255 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 01:06:26.255 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:06:26.255 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 01:06:26.255 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 01:06:26.255 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 01:06:26.255 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:06:26.255 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93236 01:06:26.255 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93236 /var/tmp/bperf.sock 01:06:26.255 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 01:06:26.255 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93236 ']' 01:06:26.255 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:06:26.255 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local 
max_retries=100 01:06:26.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:06:26.255 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:06:26.255 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:06:26.255 06:05:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:06:26.513 [2024-12-09 06:05:20.840899] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:06:26.514 [2024-12-09 06:05:20.841030] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93236 ] 01:06:26.514 [2024-12-09 06:05:20.990925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:26.514 [2024-12-09 06:05:21.029556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:06:26.772 06:05:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:06:26.772 06:05:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 01:06:26.772 06:05:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:06:26.772 06:05:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:06:26.772 06:05:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:06:27.031 06:05:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:06:27.031 06:05:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:06:27.289 nvme0n1 01:06:27.289 06:05:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:06:27.289 06:05:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:06:27.548 Running I/O for 2 seconds... 
01:06:29.421 17588.00 IOPS, 68.70 MiB/s [2024-12-09T06:05:24.007Z] 17421.00 IOPS, 68.05 MiB/s 01:06:29.421 Latency(us) 01:06:29.421 [2024-12-09T06:05:24.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:29.421 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 01:06:29.421 nvme0n1 : 2.00 17447.91 68.16 0.00 0.00 7327.96 4379.00 18469.24 01:06:29.421 [2024-12-09T06:05:24.007Z] =================================================================================================================== 01:06:29.421 [2024-12-09T06:05:24.007Z] Total : 17447.91 68.16 0.00 0.00 7327.96 4379.00 18469.24 01:06:29.421 { 01:06:29.421 "results": [ 01:06:29.421 { 01:06:29.421 "job": "nvme0n1", 01:06:29.421 "core_mask": "0x2", 01:06:29.421 "workload": "randread", 01:06:29.421 "status": "finished", 01:06:29.421 "queue_depth": 128, 01:06:29.421 "io_size": 4096, 01:06:29.421 "runtime": 2.004252, 01:06:29.421 "iops": 17447.90575237046, 01:06:29.421 "mibps": 68.15588184519711, 01:06:29.421 "io_failed": 0, 01:06:29.421 "io_timeout": 0, 01:06:29.421 "avg_latency_us": 7327.960475628461, 01:06:29.421 "min_latency_us": 4378.996363636364, 01:06:29.421 "max_latency_us": 18469.236363636363 01:06:29.421 } 01:06:29.421 ], 01:06:29.421 "core_count": 1 01:06:29.421 } 01:06:29.421 06:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:06:29.421 06:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 01:06:29.421 06:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:06:29.421 06:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:06:29.421 06:05:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 01:06:29.421 | select(.opcode=="crc32c") 01:06:29.421 | "\(.module_name) \(.executed)"' 01:06:29.988 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:06:29.988 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:06:29.988 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:06:29.988 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:06:29.988 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93236 01:06:29.988 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93236 ']' 01:06:29.988 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93236 01:06:29.988 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 01:06:29.988 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:06:29.988 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93236 01:06:29.988 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:06:29.988 killing process with pid 93236 01:06:29.988 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:06:29.988 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93236' 01:06:29.988 Received shutdown signal, test time was about 2.000000 seconds 01:06:29.988 01:06:29.988 Latency(us) 01:06:29.988 [2024-12-09T06:05:24.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:29.989 [2024-12-09T06:05:24.575Z] =================================================================================================================== 01:06:29.989 [2024-12-09T06:05:24.575Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:06:29.989 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93236 01:06:29.989 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93236 01:06:29.989 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 01:06:29.989 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 01:06:29.989 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:06:29.989 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 01:06:29.989 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 01:06:29.989 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 01:06:29.989 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:06:29.989 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93307 01:06:29.989 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 01:06:29.989 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93307 /var/tmp/bperf.sock 01:06:29.989 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93307 ']' 01:06:29.989 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:06:29.989 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 01:06:29.989 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:06:29.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:06:29.989 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:06:29.989 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:06:29.989 I/O size of 131072 is greater than zero copy threshold (65536). 01:06:29.989 Zero copy mechanism will not be used. 01:06:29.989 [2024-12-09 06:05:24.490994] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
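Each run_bperf pass drives its bdevperf instance over /var/tmp/bperf.sock with the same short RPC sequence before starting the timed workload. The sketch below is condensed from the calls logged in this run (rpc.py and bdevperf.py paths, the --ddgst flag, and the 10.0.0.3:4420 listener are all as shown above); only the $rpc shorthand variable is added for readability.

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  # 1. Finish framework init (bdevperf was started with --wait-for-rpc).
  $rpc framework_start_init
  # 2. Attach an NVMe/TCP controller with data digest enabled (--ddgst) as bdev nvme0.
  $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # 3. Kick off the I/O defined by bdevperf's -w/-o/-q/-t flags for this pass.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests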
01:06:29.989 [2024-12-09 06:05:24.491097] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93307 ] 01:06:30.247 [2024-12-09 06:05:24.632913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:30.247 [2024-12-09 06:05:24.665736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:06:30.247 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:06:30.247 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 01:06:30.247 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:06:30.247 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:06:30.247 06:05:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:06:30.814 06:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:06:30.814 06:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:06:31.072 nvme0n1 01:06:31.072 06:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:06:31.072 06:05:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:06:31.072 I/O size of 131072 is greater than zero copy threshold (65536). 01:06:31.072 Zero copy mechanism will not be used. 01:06:31.072 Running I/O for 2 seconds... 
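After each pass the harness reads the accel layer's crc32c statistics and checks that the digests were actually computed, and by the expected module (software here, since DSA is disabled for these passes). A sketch of that check, using the same jq filter and variable names that appear in the log:

  read -r acc_module acc_executed < <(
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  (( acc_executed > 0 ))          # some crc32c work must have been executed
  [[ $acc_module == software ]]   # and it must have run in the expected module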
01:06:33.380 7247.00 IOPS, 905.88 MiB/s [2024-12-09T06:05:27.966Z] 7216.50 IOPS, 902.06 MiB/s 01:06:33.380 Latency(us) 01:06:33.380 [2024-12-09T06:05:27.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:33.380 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 01:06:33.380 nvme0n1 : 2.00 7213.18 901.65 0.00 0.00 2214.04 651.64 6285.50 01:06:33.380 [2024-12-09T06:05:27.966Z] =================================================================================================================== 01:06:33.380 [2024-12-09T06:05:27.966Z] Total : 7213.18 901.65 0.00 0.00 2214.04 651.64 6285.50 01:06:33.380 { 01:06:33.380 "results": [ 01:06:33.380 { 01:06:33.380 "job": "nvme0n1", 01:06:33.380 "core_mask": "0x2", 01:06:33.380 "workload": "randread", 01:06:33.380 "status": "finished", 01:06:33.380 "queue_depth": 16, 01:06:33.380 "io_size": 131072, 01:06:33.380 "runtime": 2.003138, 01:06:33.380 "iops": 7213.182516631406, 01:06:33.380 "mibps": 901.6478145789257, 01:06:33.380 "io_failed": 0, 01:06:33.380 "io_timeout": 0, 01:06:33.380 "avg_latency_us": 2214.03593781262, 01:06:33.380 "min_latency_us": 651.6363636363636, 01:06:33.380 "max_latency_us": 6285.498181818181 01:06:33.380 } 01:06:33.380 ], 01:06:33.380 "core_count": 1 01:06:33.380 } 01:06:33.380 06:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:06:33.380 06:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 01:06:33.380 06:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:06:33.381 06:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 01:06:33.381 | select(.opcode=="crc32c") 01:06:33.381 | "\(.module_name) \(.executed)"' 01:06:33.381 06:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:06:33.639 06:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:06:33.639 06:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:06:33.639 06:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:06:33.639 06:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:06:33.639 06:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93307 01:06:33.639 06:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93307 ']' 01:06:33.639 06:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93307 01:06:33.639 06:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 01:06:33.639 06:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:06:33.639 06:05:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93307 01:06:33.639 06:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:06:33.639 killing process with pid 93307 01:06:33.639 06:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 
-- # '[' reactor_1 = sudo ']' 01:06:33.639 06:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93307' 01:06:33.639 Received shutdown signal, test time was about 2.000000 seconds 01:06:33.639 01:06:33.639 Latency(us) 01:06:33.639 [2024-12-09T06:05:28.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:33.639 [2024-12-09T06:05:28.225Z] =================================================================================================================== 01:06:33.639 [2024-12-09T06:05:28.225Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:06:33.639 06:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93307 01:06:33.639 06:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93307 01:06:33.639 06:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 01:06:33.639 06:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 01:06:33.639 06:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:06:33.639 06:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 01:06:33.639 06:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 01:06:33.639 06:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 01:06:33.639 06:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:06:33.640 06:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 01:06:33.640 06:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93384 01:06:33.640 06:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93384 /var/tmp/bperf.sock 01:06:33.640 06:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93384 ']' 01:06:33.640 06:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:06:33.640 06:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 01:06:33.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:06:33.640 06:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:06:33.640 06:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:06:33.640 06:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:06:33.640 [2024-12-09 06:05:28.201640] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
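As a quick sanity check, the IOPS and MiB/s columns bdevperf reported for the two completed passes above are mutually consistent (MiB/s = IOPS x io_size / 2^20); the figures below are the ones printed in this log, and the awk one-liner merely reproduces them.

  # randread, 4 KiB I/O:    17447.91 IOPS * 4096   / 1048576 ~= 68.16 MiB/s
  # randread, 128 KiB I/O:   7213.18 IOPS * 131072 / 1048576 ~= 901.65 MiB/s
  awk 'BEGIN { printf "%.2f %.2f\n", 17447.91*4096/1048576, 7213.18*131072/1048576 }'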
01:06:33.640 [2024-12-09 06:05:28.201761] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93384 ] 01:06:33.898 [2024-12-09 06:05:28.346467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:33.898 [2024-12-09 06:05:28.379725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:06:33.898 06:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:06:33.898 06:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 01:06:33.898 06:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:06:33.898 06:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:06:33.898 06:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:06:34.182 06:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:06:34.182 06:05:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:06:34.748 nvme0n1 01:06:34.748 06:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:06:34.748 06:05:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:06:34.748 Running I/O for 2 seconds... 
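[Editor's note] For readability, the RPC sequence the digest_clean harness has just driven can be recapped as the sketch below. This is a hedged summary, not part of the captured log: every binary path, socket, flag and address in it is copied from the trace above (bperf socket /var/tmp/bperf.sock, target 10.0.0.3:4420, NQN nqn.2016-06.io.spdk:cnode1); nothing new is introduced.

```bash
#!/usr/bin/env bash
# Hedged recap of the digest_clean flow traced above (paths/addresses taken from the log).
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bperf.sock

# 1. Start bdevperf idle: -z waits for a perform_tests RPC before running the workload,
#    --wait-for-rpc defers framework init until framework_start_init is sent.
"$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &

# 2. Finish framework init, then attach the NVMe-oF TCP controller with data digest enabled (--ddgst).
"$SPDK/scripts/rpc.py" -s "$SOCK" framework_start_init
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 3. Run the workload, then check which accel module executed the crc32c (data digest) operations.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
"$SPDK/scripts/rpc.py" -s "$SOCK" accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
```

The test then asserts that the module reported by the jq filter matches the expected one (software here, since dsa scanning is disabled) and that the executed count is non-zero, before killing the bperf process.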
01:06:37.068 20794.00 IOPS, 81.23 MiB/s [2024-12-09T06:05:31.654Z] 21128.50 IOPS, 82.53 MiB/s 01:06:37.068 Latency(us) 01:06:37.068 [2024-12-09T06:05:31.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:37.068 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:06:37.068 nvme0n1 : 2.01 21144.97 82.60 0.00 0.00 6043.82 2532.07 11439.01 01:06:37.068 [2024-12-09T06:05:31.654Z] =================================================================================================================== 01:06:37.068 [2024-12-09T06:05:31.654Z] Total : 21144.97 82.60 0.00 0.00 6043.82 2532.07 11439.01 01:06:37.068 { 01:06:37.068 "results": [ 01:06:37.068 { 01:06:37.068 "job": "nvme0n1", 01:06:37.068 "core_mask": "0x2", 01:06:37.068 "workload": "randwrite", 01:06:37.068 "status": "finished", 01:06:37.068 "queue_depth": 128, 01:06:37.068 "io_size": 4096, 01:06:37.068 "runtime": 2.006718, 01:06:37.068 "iops": 21144.974032225753, 01:06:37.068 "mibps": 82.59755481338185, 01:06:37.068 "io_failed": 0, 01:06:37.068 "io_timeout": 0, 01:06:37.068 "avg_latency_us": 6043.81845605375, 01:06:37.068 "min_latency_us": 2532.072727272727, 01:06:37.068 "max_latency_us": 11439.01090909091 01:06:37.068 } 01:06:37.068 ], 01:06:37.068 "core_count": 1 01:06:37.068 } 01:06:37.068 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:06:37.068 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 01:06:37.068 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:06:37.068 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:06:37.068 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 01:06:37.068 | select(.opcode=="crc32c") 01:06:37.068 | "\(.module_name) \(.executed)"' 01:06:37.068 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:06:37.068 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:06:37.068 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:06:37.068 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:06:37.068 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93384 01:06:37.068 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93384 ']' 01:06:37.068 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93384 01:06:37.068 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 01:06:37.069 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:06:37.069 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93384 01:06:37.327 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:06:37.327 killing process with pid 93384 01:06:37.327 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:06:37.327 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93384' 01:06:37.327 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93384 01:06:37.327 Received shutdown signal, test time was about 2.000000 seconds 01:06:37.327 01:06:37.327 Latency(us) 01:06:37.327 [2024-12-09T06:05:31.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:37.327 [2024-12-09T06:05:31.913Z] =================================================================================================================== 01:06:37.327 [2024-12-09T06:05:31.913Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:06:37.327 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93384 01:06:37.327 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 01:06:37.327 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 01:06:37.327 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 01:06:37.327 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 01:06:37.327 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 01:06:37.327 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 01:06:37.327 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 01:06:37.328 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93456 01:06:37.328 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 01:06:37.328 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93456 /var/tmp/bperf.sock 01:06:37.328 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93456 ']' 01:06:37.328 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:06:37.328 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 01:06:37.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:06:37.328 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:06:37.328 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:06:37.328 06:05:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:06:37.328 I/O size of 131072 is greater than zero copy threshold (65536). 01:06:37.328 Zero copy mechanism will not be used. 01:06:37.328 [2024-12-09 06:05:31.847119] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
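[Editor's note] Each run above also leaves a JSON results blob in the log (the "results" arrays with iops, mibps and latency fields). As a hypothetical convenience only, a blob like that could be post-processed with a one-liner such as the following; the field names are taken from the output above, but saving the blob to results.json is an assumption for illustration, not something the harness does.

```bash
# Hypothetical post-processing of a bdevperf results blob like the ones printed above
# (assumes the JSON has been saved to results.json; field names match the logged output).
jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us (min \(.min_latency_us), max \(.max_latency_us))"' results.json
```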
01:06:37.328 [2024-12-09 06:05:31.847214] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93456 ] 01:06:37.586 [2024-12-09 06:05:31.991109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:37.586 [2024-12-09 06:05:32.023762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:06:37.586 06:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:06:37.586 06:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 01:06:37.586 06:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 01:06:37.586 06:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 01:06:37.586 06:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:06:38.151 06:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:06:38.151 06:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:06:38.408 nvme0n1 01:06:38.408 06:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 01:06:38.408 06:05:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:06:38.408 I/O size of 131072 is greater than zero copy threshold (65536). 01:06:38.408 Zero copy mechanism will not be used. 01:06:38.408 Running I/O for 2 seconds... 
01:06:40.718 6418.00 IOPS, 802.25 MiB/s [2024-12-09T06:05:35.304Z] 6446.00 IOPS, 805.75 MiB/s 01:06:40.718 Latency(us) 01:06:40.718 [2024-12-09T06:05:35.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:40.718 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 01:06:40.718 nvme0n1 : 2.00 6442.78 805.35 0.00 0.00 2477.35 1884.16 11081.54 01:06:40.718 [2024-12-09T06:05:35.304Z] =================================================================================================================== 01:06:40.718 [2024-12-09T06:05:35.304Z] Total : 6442.78 805.35 0.00 0.00 2477.35 1884.16 11081.54 01:06:40.718 { 01:06:40.718 "results": [ 01:06:40.718 { 01:06:40.718 "job": "nvme0n1", 01:06:40.718 "core_mask": "0x2", 01:06:40.718 "workload": "randwrite", 01:06:40.718 "status": "finished", 01:06:40.718 "queue_depth": 16, 01:06:40.718 "io_size": 131072, 01:06:40.718 "runtime": 2.003482, 01:06:40.718 "iops": 6442.783114597486, 01:06:40.718 "mibps": 805.3478893246858, 01:06:40.718 "io_failed": 0, 01:06:40.718 "io_timeout": 0, 01:06:40.718 "avg_latency_us": 2477.346407865454, 01:06:40.718 "min_latency_us": 1884.16, 01:06:40.718 "max_latency_us": 11081.541818181819 01:06:40.718 } 01:06:40.718 ], 01:06:40.718 "core_count": 1 01:06:40.718 } 01:06:40.718 06:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 01:06:40.718 06:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 01:06:40.718 06:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 01:06:40.718 | select(.opcode=="crc32c") 01:06:40.718 | "\(.module_name) \(.executed)"' 01:06:40.718 06:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 01:06:40.718 06:05:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 01:06:40.718 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 01:06:40.718 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 01:06:40.718 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 01:06:40.718 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 01:06:40.718 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93456 01:06:40.718 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93456 ']' 01:06:40.718 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93456 01:06:40.718 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 01:06:40.718 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:06:40.718 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93456 01:06:40.718 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:06:40.718 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:06:40.718 
killing process with pid 93456 01:06:40.718 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93456' 01:06:40.718 Received shutdown signal, test time was about 2.000000 seconds 01:06:40.718 01:06:40.718 Latency(us) 01:06:40.718 [2024-12-09T06:05:35.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:40.718 [2024-12-09T06:05:35.304Z] =================================================================================================================== 01:06:40.718 [2024-12-09T06:05:35.304Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:06:40.718 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93456 01:06:40.718 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93456 01:06:40.976 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 93194 01:06:40.976 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93194 ']' 01:06:40.976 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93194 01:06:40.976 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 01:06:40.976 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:06:40.976 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93194 01:06:40.976 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:06:40.976 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:06:40.976 killing process with pid 93194 01:06:40.976 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93194' 01:06:40.976 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93194 01:06:40.976 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93194 01:06:41.234 01:06:41.234 real 0m15.280s 01:06:41.234 user 0m30.109s 01:06:41.234 sys 0m3.963s 01:06:41.234 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 01:06:41.234 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 01:06:41.234 ************************************ 01:06:41.234 END TEST nvmf_digest_clean 01:06:41.234 ************************************ 01:06:41.234 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 01:06:41.234 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:06:41.234 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 01:06:41.234 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 01:06:41.234 ************************************ 01:06:41.234 START TEST nvmf_digest_error 01:06:41.234 ************************************ 01:06:41.234 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 01:06:41.234 06:05:35 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 01:06:41.234 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:06:41.234 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 01:06:41.234 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:06:41.234 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=93555 01:06:41.234 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 93555 01:06:41.234 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 93555 ']' 01:06:41.234 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 01:06:41.234 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:06:41.234 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 01:06:41.234 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:06:41.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:06:41.234 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 01:06:41.234 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:06:41.234 [2024-12-09 06:05:35.692178] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:06:41.234 [2024-12-09 06:05:35.692275] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:06:41.503 [2024-12-09 06:05:35.838074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:41.503 [2024-12-09 06:05:35.869081] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:06:41.503 [2024-12-09 06:05:35.869147] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:06:41.503 [2024-12-09 06:05:35.869159] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:06:41.503 [2024-12-09 06:05:35.869167] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:06:41.503 [2024-12-09 06:05:35.869174] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
01:06:41.503 [2024-12-09 06:05:35.869487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:06:41.503 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:06:41.503 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 01:06:41.503 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:06:41.503 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 01:06:41.503 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:06:41.503 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:06:41.503 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 01:06:41.503 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:41.503 06:05:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:06:41.503 [2024-12-09 06:05:35.997981] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 01:06:41.503 06:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:41.503 06:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 01:06:41.503 06:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 01:06:41.503 06:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:41.503 06:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:06:41.503 null0 01:06:41.503 [2024-12-09 06:05:36.073965] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:06:41.769 [2024-12-09 06:05:36.098137] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:06:41.769 06:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:41.769 06:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 01:06:41.769 06:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 01:06:41.769 06:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 01:06:41.769 06:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 01:06:41.769 06:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 01:06:41.769 06:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93586 01:06:41.769 06:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 01:06:41.770 06:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93586 /var/tmp/bperf.sock 01:06:41.770 06:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 93586 ']' 01:06:41.770 06:05:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:06:41.770 06:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 01:06:41.770 06:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:06:41.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:06:41.770 06:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 01:06:41.770 06:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:06:41.770 [2024-12-09 06:05:36.163842] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:06:41.770 [2024-12-09 06:05:36.163954] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93586 ] 01:06:41.770 [2024-12-09 06:05:36.327448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:42.040 [2024-12-09 06:05:36.375200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:06:42.040 06:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:06:42.040 06:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 01:06:42.040 06:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:06:42.040 06:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:06:42.298 06:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 01:06:42.298 06:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:42.298 06:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:06:42.298 06:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:42.298 06:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:06:42.298 06:05:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:06:42.556 nvme0n1 01:06:42.556 06:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 01:06:42.556 06:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:42.556 06:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:06:42.814 06:05:37 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:42.814 06:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 01:06:42.814 06:05:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:06:42.814 Running I/O for 2 seconds... 01:06:42.814 [2024-12-09 06:05:37.284494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:42.814 [2024-12-09 06:05:37.284575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:42.814 [2024-12-09 06:05:37.284592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:42.814 [2024-12-09 06:05:37.298800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:42.814 [2024-12-09 06:05:37.298876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:42.814 [2024-12-09 06:05:37.298892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:42.814 [2024-12-09 06:05:37.313245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:42.814 [2024-12-09 06:05:37.313316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:42.814 [2024-12-09 06:05:37.313333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:42.814 [2024-12-09 06:05:37.326536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:42.814 [2024-12-09 06:05:37.326858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:42.814 [2024-12-09 06:05:37.326879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:42.814 [2024-12-09 06:05:37.340379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:42.814 [2024-12-09 06:05:37.340452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:42.814 [2024-12-09 06:05:37.340469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:42.814 [2024-12-09 06:05:37.353861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:42.815 [2024-12-09 06:05:37.354170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:42.815 [2024-12-09 06:05:37.354193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
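[Editor's note] The digest_error test driving the stream of "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR" records around this point works by remapping the crc32c opcode to the accel "error" module when the nvmf target starts (accel_assign_opc -o crc32c -m error, seen earlier in this run) and then arming corruption just before the workload (accel_error_inject_error -o crc32c -t corrupt -i 256). The sketch below is a hedged recap using only RPCs that appear in the trace; it assumes these rpc_cmd calls go to the target's default /var/tmp/spdk.sock while the bdevperf side keeps using /var/tmp/bperf.sock, which is how the sockets are used above.

```bash
# Hedged recap of the error-injection setup traced above (RPC names and arguments copied from the log).
SPDK=/home/vagrant/spdk_repo/spdk

# At target startup: route all crc32c (digest) operations to the accel "error" module.
"$SPDK/scripts/rpc.py" accel_assign_opc -o crc32c -m error

# Before attaching the controller: keep injection disabled so the setup traffic succeeds.
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

# Once nvme0 is attached over TCP with --ddgst: arm crc32c corruption (the -i 256 argument
# is passed through exactly as in the trace above) and start the workload on the bperf side.
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
```

Every corrupted digest then shows up on the initiator as an nvme_tcp data digest error followed by a transient transport error completion, which is the pattern repeated through the remainder of this run.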
01:06:42.815 [2024-12-09 06:05:37.368395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:42.815 [2024-12-09 06:05:37.368467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:42.815 [2024-12-09 06:05:37.368484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:42.815 [2024-12-09 06:05:37.382380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:42.815 [2024-12-09 06:05:37.382453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:42.815 [2024-12-09 06:05:37.382468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:42.815 [2024-12-09 06:05:37.396360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:42.815 [2024-12-09 06:05:37.396431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:42.815 [2024-12-09 06:05:37.396447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.073 [2024-12-09 06:05:37.410529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.073 [2024-12-09 06:05:37.410596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.073 [2024-12-09 06:05:37.410612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.073 [2024-12-09 06:05:37.425296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.073 [2024-12-09 06:05:37.425529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.073 [2024-12-09 06:05:37.425550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.073 [2024-12-09 06:05:37.439910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.073 [2024-12-09 06:05:37.439977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.073 [2024-12-09 06:05:37.439992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.073 [2024-12-09 06:05:37.454747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.073 [2024-12-09 06:05:37.454808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.073 [2024-12-09 06:05:37.454824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.073 [2024-12-09 06:05:37.469199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.073 [2024-12-09 06:05:37.469458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.073 [2024-12-09 06:05:37.469478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.073 [2024-12-09 06:05:37.483740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.073 [2024-12-09 06:05:37.483791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.073 [2024-12-09 06:05:37.483807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.073 [2024-12-09 06:05:37.499934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.073 [2024-12-09 06:05:37.500005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.073 [2024-12-09 06:05:37.500023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.073 [2024-12-09 06:05:37.514293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.073 [2024-12-09 06:05:37.514353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.073 [2024-12-09 06:05:37.514369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.073 [2024-12-09 06:05:37.526428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.073 [2024-12-09 06:05:37.526489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.073 [2024-12-09 06:05:37.526505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.073 [2024-12-09 06:05:37.541397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.073 [2024-12-09 06:05:37.541455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.073 [2024-12-09 06:05:37.541471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.073 [2024-12-09 06:05:37.554578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.073 [2024-12-09 06:05:37.554671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.073 [2024-12-09 06:05:37.554690] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.073 [2024-12-09 06:05:37.569776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.073 [2024-12-09 06:05:37.570047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.073 [2024-12-09 06:05:37.570066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.073 [2024-12-09 06:05:37.584288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.073 [2024-12-09 06:05:37.584344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.073 [2024-12-09 06:05:37.584359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.073 [2024-12-09 06:05:37.598252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.073 [2024-12-09 06:05:37.598311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.073 [2024-12-09 06:05:37.598326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.073 [2024-12-09 06:05:37.612199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.073 [2024-12-09 06:05:37.612260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.073 [2024-12-09 06:05:37.612275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.073 [2024-12-09 06:05:37.627509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.073 [2024-12-09 06:05:37.627567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.073 [2024-12-09 06:05:37.627583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.073 [2024-12-09 06:05:37.642124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.073 [2024-12-09 06:05:37.642184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.073 [2024-12-09 06:05:37.642199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.073 [2024-12-09 06:05:37.656263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.073 [2024-12-09 06:05:37.656330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:06:43.073 [2024-12-09 06:05:37.656346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.332 [2024-12-09 06:05:37.670454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.332 [2024-12-09 06:05:37.670512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.332 [2024-12-09 06:05:37.670527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.332 [2024-12-09 06:05:37.684944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.332 [2024-12-09 06:05:37.685155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.332 [2024-12-09 06:05:37.685175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.332 [2024-12-09 06:05:37.700057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.332 [2024-12-09 06:05:37.700302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.332 [2024-12-09 06:05:37.700324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.332 [2024-12-09 06:05:37.714341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.332 [2024-12-09 06:05:37.714413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.332 [2024-12-09 06:05:37.714430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.332 [2024-12-09 06:05:37.728300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.332 [2024-12-09 06:05:37.728365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.332 [2024-12-09 06:05:37.728381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.332 [2024-12-09 06:05:37.742257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.332 [2024-12-09 06:05:37.742311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.332 [2024-12-09 06:05:37.742326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.332 [2024-12-09 06:05:37.756429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.332 [2024-12-09 06:05:37.756496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 
lba:17297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.332 [2024-12-09 06:05:37.756512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.332 [2024-12-09 06:05:37.771225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.332 [2024-12-09 06:05:37.771496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.332 [2024-12-09 06:05:37.771517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.333 [2024-12-09 06:05:37.785690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.333 [2024-12-09 06:05:37.785758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.333 [2024-12-09 06:05:37.785774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.333 [2024-12-09 06:05:37.799826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.333 [2024-12-09 06:05:37.799895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.333 [2024-12-09 06:05:37.799911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.333 [2024-12-09 06:05:37.811982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.333 [2024-12-09 06:05:37.812050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.333 [2024-12-09 06:05:37.812067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.333 [2024-12-09 06:05:37.828185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.333 [2024-12-09 06:05:37.828252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.333 [2024-12-09 06:05:37.828268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.333 [2024-12-09 06:05:37.842611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.333 [2024-12-09 06:05:37.842914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.333 [2024-12-09 06:05:37.842936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.333 [2024-12-09 06:05:37.856904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.333 [2024-12-09 06:05:37.856970] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.333 [2024-12-09 06:05:37.856985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.333 [2024-12-09 06:05:37.871549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.333 [2024-12-09 06:05:37.871613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.333 [2024-12-09 06:05:37.871628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.333 [2024-12-09 06:05:37.886592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.333 [2024-12-09 06:05:37.886905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.333 [2024-12-09 06:05:37.886927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.333 [2024-12-09 06:05:37.901257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.333 [2024-12-09 06:05:37.901326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.333 [2024-12-09 06:05:37.901343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.333 [2024-12-09 06:05:37.915628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.333 [2024-12-09 06:05:37.915713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.333 [2024-12-09 06:05:37.915729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.592 [2024-12-09 06:05:37.927739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.592 [2024-12-09 06:05:37.927804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.592 [2024-12-09 06:05:37.927820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.592 [2024-12-09 06:05:37.942075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.592 [2024-12-09 06:05:37.942375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.592 [2024-12-09 06:05:37.942395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.592 [2024-12-09 06:05:37.957460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 
01:06:43.592 [2024-12-09 06:05:37.957727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.592 [2024-12-09 06:05:37.957748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.592 [2024-12-09 06:05:37.972363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.592 [2024-12-09 06:05:37.972437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.592 [2024-12-09 06:05:37.972455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.592 [2024-12-09 06:05:37.987526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.592 [2024-12-09 06:05:37.987599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.592 [2024-12-09 06:05:37.987615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.592 [2024-12-09 06:05:38.001821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.592 [2024-12-09 06:05:38.001900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.592 [2024-12-09 06:05:38.001916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.592 [2024-12-09 06:05:38.016896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.592 [2024-12-09 06:05:38.017138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.592 [2024-12-09 06:05:38.017158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.592 [2024-12-09 06:05:38.031106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.592 [2024-12-09 06:05:38.031161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.592 [2024-12-09 06:05:38.031177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.592 [2024-12-09 06:05:38.045126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.592 [2024-12-09 06:05:38.045187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.592 [2024-12-09 06:05:38.045201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.592 [2024-12-09 06:05:38.059180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.592 [2024-12-09 06:05:38.059239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.592 [2024-12-09 06:05:38.059254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.592 [2024-12-09 06:05:38.074191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.592 [2024-12-09 06:05:38.074266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.592 [2024-12-09 06:05:38.074286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.592 [2024-12-09 06:05:38.089018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.592 [2024-12-09 06:05:38.089087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.592 [2024-12-09 06:05:38.089103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.592 [2024-12-09 06:05:38.103530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.592 [2024-12-09 06:05:38.103602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.592 [2024-12-09 06:05:38.103619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.592 [2024-12-09 06:05:38.118207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.592 [2024-12-09 06:05:38.118273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.592 [2024-12-09 06:05:38.118289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.592 [2024-12-09 06:05:38.133036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.592 [2024-12-09 06:05:38.133109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.592 [2024-12-09 06:05:38.133125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.592 [2024-12-09 06:05:38.148297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.592 [2024-12-09 06:05:38.148613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.592 [2024-12-09 06:05:38.148638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.592 [2024-12-09 06:05:38.161945] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.592 [2024-12-09 06:05:38.162047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.592 [2024-12-09 06:05:38.162075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.851 [2024-12-09 06:05:38.178375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.851 [2024-12-09 06:05:38.178450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.851 [2024-12-09 06:05:38.178466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.851 [2024-12-09 06:05:38.192667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.851 [2024-12-09 06:05:38.192736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.851 [2024-12-09 06:05:38.192752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.851 [2024-12-09 06:05:38.206826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.851 [2024-12-09 06:05:38.206897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.851 [2024-12-09 06:05:38.206913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.851 [2024-12-09 06:05:38.221060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.851 [2024-12-09 06:05:38.221129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.851 [2024-12-09 06:05:38.221145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.851 [2024-12-09 06:05:38.235240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.851 [2024-12-09 06:05:38.235310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.851 [2024-12-09 06:05:38.235326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.851 [2024-12-09 06:05:38.251831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.851 [2024-12-09 06:05:38.252088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.851 [2024-12-09 06:05:38.252109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 01:06:43.851 17466.00 IOPS, 68.23 MiB/s [2024-12-09T06:05:38.437Z] [2024-12-09 06:05:38.267712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.851 [2024-12-09 06:05:38.267776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.851 [2024-12-09 06:05:38.267792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.851 [2024-12-09 06:05:38.280761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.851 [2024-12-09 06:05:38.280830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.851 [2024-12-09 06:05:38.280846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.851 [2024-12-09 06:05:38.297612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.851 [2024-12-09 06:05:38.297692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.851 [2024-12-09 06:05:38.297709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.851 [2024-12-09 06:05:38.311733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.851 [2024-12-09 06:05:38.311799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.852 [2024-12-09 06:05:38.311815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.852 [2024-12-09 06:05:38.323948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.852 [2024-12-09 06:05:38.324012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.852 [2024-12-09 06:05:38.324028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.852 [2024-12-09 06:05:38.339443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.852 [2024-12-09 06:05:38.339729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.852 [2024-12-09 06:05:38.339749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.852 [2024-12-09 06:05:38.393619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.852 [2024-12-09 06:05:38.393707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.852 [2024-12-09 06:05:38.393725] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.852 [2024-12-09 06:05:38.408561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.852 [2024-12-09 06:05:38.408632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.852 [2024-12-09 06:05:38.408667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:43.852 [2024-12-09 06:05:38.422695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:43.852 [2024-12-09 06:05:38.422759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:43.852 [2024-12-09 06:05:38.422775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.111 [2024-12-09 06:05:38.436690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.111 [2024-12-09 06:05:38.436752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.111 [2024-12-09 06:05:38.436768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.111 [2024-12-09 06:05:38.451488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.111 [2024-12-09 06:05:38.451557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.111 [2024-12-09 06:05:38.451573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.111 [2024-12-09 06:05:38.465782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.111 [2024-12-09 06:05:38.466051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.111 [2024-12-09 06:05:38.466070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.111 [2024-12-09 06:05:38.480624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.111 [2024-12-09 06:05:38.480704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.111 [2024-12-09 06:05:38.480719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.111 [2024-12-09 06:05:38.494246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.111 [2024-12-09 06:05:38.494314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:06:44.111 [2024-12-09 06:05:38.494330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.111 [2024-12-09 06:05:38.509379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.111 [2024-12-09 06:05:38.509662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.111 [2024-12-09 06:05:38.509684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.111 [2024-12-09 06:05:38.524499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.111 [2024-12-09 06:05:38.524564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.111 [2024-12-09 06:05:38.524580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.111 [2024-12-09 06:05:38.539665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.111 [2024-12-09 06:05:38.539734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.111 [2024-12-09 06:05:38.539750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.111 [2024-12-09 06:05:38.554032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.111 [2024-12-09 06:05:38.554094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.111 [2024-12-09 06:05:38.554110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.111 [2024-12-09 06:05:38.566847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.111 [2024-12-09 06:05:38.566917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.111 [2024-12-09 06:05:38.566932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.111 [2024-12-09 06:05:38.579908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.111 [2024-12-09 06:05:38.579974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.111 [2024-12-09 06:05:38.579989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.111 [2024-12-09 06:05:38.594755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.111 [2024-12-09 06:05:38.594850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 
lba:15908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.111 [2024-12-09 06:05:38.594870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.111 [2024-12-09 06:05:38.608576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.111 [2024-12-09 06:05:38.608636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.111 [2024-12-09 06:05:38.608664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.111 [2024-12-09 06:05:38.622417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.111 [2024-12-09 06:05:38.622472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.111 [2024-12-09 06:05:38.622488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.111 [2024-12-09 06:05:38.636215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.111 [2024-12-09 06:05:38.636276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.111 [2024-12-09 06:05:38.636292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.111 [2024-12-09 06:05:38.649374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.111 [2024-12-09 06:05:38.649605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.111 [2024-12-09 06:05:38.649627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.111 [2024-12-09 06:05:38.664082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.111 [2024-12-09 06:05:38.664151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.111 [2024-12-09 06:05:38.664167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.111 [2024-12-09 06:05:38.678200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.111 [2024-12-09 06:05:38.678263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.111 [2024-12-09 06:05:38.678279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.111 [2024-12-09 06:05:38.692406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.111 [2024-12-09 06:05:38.692477] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.111 [2024-12-09 06:05:38.692494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.370 [2024-12-09 06:05:38.707362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.370 [2024-12-09 06:05:38.707433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.370 [2024-12-09 06:05:38.707450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.370 [2024-12-09 06:05:38.722141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.370 [2024-12-09 06:05:38.722197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.370 [2024-12-09 06:05:38.722213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.370 [2024-12-09 06:05:38.738630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.370 [2024-12-09 06:05:38.738730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.370 [2024-12-09 06:05:38.738748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.370 [2024-12-09 06:05:38.753863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.370 [2024-12-09 06:05:38.753933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.370 [2024-12-09 06:05:38.753948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.370 [2024-12-09 06:05:38.769856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.370 [2024-12-09 06:05:38.769930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.370 [2024-12-09 06:05:38.769947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.370 [2024-12-09 06:05:38.783413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.370 [2024-12-09 06:05:38.783481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.370 [2024-12-09 06:05:38.783496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.370 [2024-12-09 06:05:38.798431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 
01:06:44.370 [2024-12-09 06:05:38.798499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.370 [2024-12-09 06:05:38.798515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.370 [2024-12-09 06:05:38.812587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.370 [2024-12-09 06:05:38.812673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.370 [2024-12-09 06:05:38.812691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.370 [2024-12-09 06:05:38.827428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.370 [2024-12-09 06:05:38.827487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.370 [2024-12-09 06:05:38.827502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.370 [2024-12-09 06:05:38.842673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.370 [2024-12-09 06:05:38.842736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.370 [2024-12-09 06:05:38.842751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.370 [2024-12-09 06:05:38.857984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.370 [2024-12-09 06:05:38.858050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.370 [2024-12-09 06:05:38.858066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.370 [2024-12-09 06:05:38.871229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.370 [2024-12-09 06:05:38.871292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.370 [2024-12-09 06:05:38.871309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.370 [2024-12-09 06:05:38.886367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.370 [2024-12-09 06:05:38.886437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.370 [2024-12-09 06:05:38.886454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.370 [2024-12-09 06:05:38.901232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.370 [2024-12-09 06:05:38.901294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.370 [2024-12-09 06:05:38.901310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.370 [2024-12-09 06:05:38.915561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.370 [2024-12-09 06:05:38.915626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.370 [2024-12-09 06:05:38.915642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.370 [2024-12-09 06:05:38.929687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.370 [2024-12-09 06:05:38.929746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.370 [2024-12-09 06:05:38.929761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.370 [2024-12-09 06:05:38.943848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.370 [2024-12-09 06:05:38.943916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.370 [2024-12-09 06:05:38.943932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.628 [2024-12-09 06:05:38.958181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.628 [2024-12-09 06:05:38.958245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.628 [2024-12-09 06:05:38.958259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.628 [2024-12-09 06:05:38.972859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.628 [2024-12-09 06:05:38.972912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.628 [2024-12-09 06:05:38.972928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.628 [2024-12-09 06:05:38.988048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.628 [2024-12-09 06:05:38.988104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.628 [2024-12-09 06:05:38.988119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.628 [2024-12-09 06:05:39.000820] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.628 [2024-12-09 06:05:39.000876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.628 [2024-12-09 06:05:39.000890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.628 [2024-12-09 06:05:39.014995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.628 [2024-12-09 06:05:39.015053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.628 [2024-12-09 06:05:39.015068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.628 [2024-12-09 06:05:39.029605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.628 [2024-12-09 06:05:39.029680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.628 [2024-12-09 06:05:39.029696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.628 [2024-12-09 06:05:39.042784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.628 [2024-12-09 06:05:39.042840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.628 [2024-12-09 06:05:39.042855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.628 [2024-12-09 06:05:39.058205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.628 [2024-12-09 06:05:39.058266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.628 [2024-12-09 06:05:39.058282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.628 [2024-12-09 06:05:39.073410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.628 [2024-12-09 06:05:39.073471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.628 [2024-12-09 06:05:39.073486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.628 [2024-12-09 06:05:39.089277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.628 [2024-12-09 06:05:39.089338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.628 [2024-12-09 06:05:39.089353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 01:06:44.628 [2024-12-09 06:05:39.105116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.628 [2024-12-09 06:05:39.105170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.628 [2024-12-09 06:05:39.105185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.628 [2024-12-09 06:05:39.120846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.628 [2024-12-09 06:05:39.120900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.628 [2024-12-09 06:05:39.120914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.628 [2024-12-09 06:05:39.137355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.628 [2024-12-09 06:05:39.137409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.628 [2024-12-09 06:05:39.137423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.628 [2024-12-09 06:05:39.153056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.628 [2024-12-09 06:05:39.153107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.628 [2024-12-09 06:05:39.153122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.628 [2024-12-09 06:05:39.168859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.628 [2024-12-09 06:05:39.168919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.628 [2024-12-09 06:05:39.168933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.628 [2024-12-09 06:05:39.184558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.628 [2024-12-09 06:05:39.184617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.628 [2024-12-09 06:05:39.184632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.628 [2024-12-09 06:05:39.200340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.628 [2024-12-09 06:05:39.200394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.628 [2024-12-09 06:05:39.200409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.886 [2024-12-09 06:05:39.216119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.886 [2024-12-09 06:05:39.216173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.886 [2024-12-09 06:05:39.216187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.886 [2024-12-09 06:05:39.231877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.886 [2024-12-09 06:05:39.231929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.886 [2024-12-09 06:05:39.231944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.886 [2024-12-09 06:05:39.247615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.886 [2024-12-09 06:05:39.247684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.886 [2024-12-09 06:05:39.247699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.886 17026.50 IOPS, 66.51 MiB/s [2024-12-09T06:05:39.472Z] [2024-12-09 06:05:39.263745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea42d0) 01:06:44.886 [2024-12-09 06:05:39.263803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:44.886 [2024-12-09 06:05:39.263817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:44.886 01:06:44.886 Latency(us) 01:06:44.886 [2024-12-09T06:05:39.472Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:44.886 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 01:06:44.886 nvme0n1 : 2.01 17045.34 66.58 0.00 0.00 7501.01 3783.21 55765.18 01:06:44.886 [2024-12-09T06:05:39.472Z] =================================================================================================================== 01:06:44.886 [2024-12-09T06:05:39.472Z] Total : 17045.34 66.58 0.00 0.00 7501.01 3783.21 55765.18 01:06:44.886 { 01:06:44.886 "results": [ 01:06:44.886 { 01:06:44.886 "job": "nvme0n1", 01:06:44.886 "core_mask": "0x2", 01:06:44.886 "workload": "randread", 01:06:44.886 "status": "finished", 01:06:44.886 "queue_depth": 128, 01:06:44.886 "io_size": 4096, 01:06:44.886 "runtime": 2.005299, 01:06:44.886 "iops": 17045.33837597286, 01:06:44.886 "mibps": 66.58335303114399, 01:06:44.886 "io_failed": 0, 01:06:44.886 "io_timeout": 0, 01:06:44.886 "avg_latency_us": 7501.0052542747035, 01:06:44.886 "min_latency_us": 3783.2145454545453, 01:06:44.886 "max_latency_us": 55765.178181818184 01:06:44.886 } 01:06:44.886 ], 01:06:44.886 "core_count": 1 01:06:44.886 } 01:06:44.886 06:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 01:06:44.886 06:05:39 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 01:06:44.886 06:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 01:06:44.886 06:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 01:06:44.886 | .driver_specific 01:06:44.886 | .nvme_error 01:06:44.886 | .status_code 01:06:44.886 | .command_transient_transport_error' 01:06:45.145 06:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 134 > 0 )) 01:06:45.145 06:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93586 01:06:45.145 06:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 93586 ']' 01:06:45.145 06:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 93586 01:06:45.145 06:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 01:06:45.145 06:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:06:45.145 06:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93586 01:06:45.145 killing process with pid 93586 01:06:45.145 Received shutdown signal, test time was about 2.000000 seconds 01:06:45.145 01:06:45.145 Latency(us) 01:06:45.145 [2024-12-09T06:05:39.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:45.145 [2024-12-09T06:05:39.731Z] =================================================================================================================== 01:06:45.145 [2024-12-09T06:05:39.731Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:06:45.145 06:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:06:45.145 06:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:06:45.145 06:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93586' 01:06:45.145 06:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 93586 01:06:45.145 06:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 93586 01:06:45.404 06:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 01:06:45.404 06:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 01:06:45.404 06:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 01:06:45.404 06:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 01:06:45.404 06:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 01:06:45.404 06:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93657 01:06:45.404 06:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 01:06:45.404 06:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@60 -- # waitforlisten 93657 /var/tmp/bperf.sock 01:06:45.404 06:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 93657 ']' 01:06:45.404 06:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:06:45.404 06:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 01:06:45.404 06:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:06:45.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:06:45.404 06:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 01:06:45.404 06:05:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:06:45.404 I/O size of 131072 is greater than zero copy threshold (65536). 01:06:45.404 Zero copy mechanism will not be used. 01:06:45.404 [2024-12-09 06:05:39.831172] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:06:45.404 [2024-12-09 06:05:39.831273] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93657 ] 01:06:45.404 [2024-12-09 06:05:39.976168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:45.663 [2024-12-09 06:05:40.009342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:06:45.663 06:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:06:45.663 06:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 01:06:45.663 06:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:06:45.663 06:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:06:45.922 06:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 01:06:45.922 06:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:45.922 06:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:06:45.922 06:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:45.922 06:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:06:45.922 06:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:06:46.181 nvme0n1 01:06:46.181 06:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd 
accel_error_inject_error -o crc32c -t corrupt -i 32 01:06:46.181 06:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:46.181 06:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:06:46.181 06:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:46.181 06:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 01:06:46.181 06:05:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:06:46.441 I/O size of 131072 is greater than zero copy threshold (65536). 01:06:46.441 Zero copy mechanism will not be used. 01:06:46.441 Running I/O for 2 seconds... 01:06:46.441 [2024-12-09 06:05:40.954318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.441 [2024-12-09 06:05:40.954406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.441 [2024-12-09 06:05:40.954423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.441 [2024-12-09 06:05:40.958127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.441 [2024-12-09 06:05:40.958187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.441 [2024-12-09 06:05:40.958202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.441 [2024-12-09 06:05:40.962720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.441 [2024-12-09 06:05:40.962790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.441 [2024-12-09 06:05:40.962806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.441 [2024-12-09 06:05:40.968280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.441 [2024-12-09 06:05:40.968353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.441 [2024-12-09 06:05:40.968370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.441 [2024-12-09 06:05:40.973764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.441 [2024-12-09 06:05:40.973833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.441 [2024-12-09 06:05:40.973850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.441 [2024-12-09 06:05:40.977336] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.441 [2024-12-09 06:05:40.977390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.441 [2024-12-09 06:05:40.977405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.441 [2024-12-09 06:05:40.981908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.441 [2024-12-09 06:05:40.981972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.441 [2024-12-09 06:05:40.981988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.441 [2024-12-09 06:05:40.985357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.441 [2024-12-09 06:05:40.985404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.441 [2024-12-09 06:05:40.985418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.441 [2024-12-09 06:05:40.989321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.441 [2024-12-09 06:05:40.989375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.441 [2024-12-09 06:05:40.989390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.441 [2024-12-09 06:05:40.993762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.441 [2024-12-09 06:05:40.993818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.441 [2024-12-09 06:05:40.993833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.441 [2024-12-09 06:05:40.998123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.441 [2024-12-09 06:05:40.998172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.441 [2024-12-09 06:05:40.998187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.441 [2024-12-09 06:05:41.001763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.441 [2024-12-09 06:05:41.001814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.441 [2024-12-09 06:05:41.001829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 
01:06:46.441 [2024-12-09 06:05:41.006247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.441 [2024-12-09 06:05:41.006313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.441 [2024-12-09 06:05:41.006328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.441 [2024-12-09 06:05:41.009922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.441 [2024-12-09 06:05:41.009969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.441 [2024-12-09 06:05:41.009984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.441 [2024-12-09 06:05:41.014446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.441 [2024-12-09 06:05:41.014493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.441 [2024-12-09 06:05:41.014508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.441 [2024-12-09 06:05:41.019039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.441 [2024-12-09 06:05:41.019085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.441 [2024-12-09 06:05:41.019099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.441 [2024-12-09 06:05:41.022454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.441 [2024-12-09 06:05:41.022499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.441 [2024-12-09 06:05:41.022513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.715 [2024-12-09 06:05:41.027090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.715 [2024-12-09 06:05:41.027135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.715 [2024-12-09 06:05:41.027149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.715 [2024-12-09 06:05:41.031597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.715 [2024-12-09 06:05:41.031642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.715 [2024-12-09 06:05:41.031669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.715 [2024-12-09 06:05:41.035204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.715 [2024-12-09 06:05:41.035250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.715 [2024-12-09 06:05:41.035264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.715 [2024-12-09 06:05:41.039924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.715 [2024-12-09 06:05:41.039985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.715 [2024-12-09 06:05:41.040001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.715 [2024-12-09 06:05:41.043260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.715 [2024-12-09 06:05:41.043309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.715 [2024-12-09 06:05:41.043323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.715 [2024-12-09 06:05:41.047512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.715 [2024-12-09 06:05:41.047570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.715 [2024-12-09 06:05:41.047584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.715 [2024-12-09 06:05:41.052448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.715 [2024-12-09 06:05:41.052508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.715 [2024-12-09 06:05:41.052523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.715 [2024-12-09 06:05:41.057465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.715 [2024-12-09 06:05:41.057527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.715 [2024-12-09 06:05:41.057541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.715 [2024-12-09 06:05:41.062803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.715 [2024-12-09 06:05:41.062858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.715 [2024-12-09 06:05:41.062872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.715 [2024-12-09 06:05:41.067365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.715 [2024-12-09 06:05:41.067423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.715 [2024-12-09 06:05:41.067439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.715 [2024-12-09 06:05:41.070275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.716 [2024-12-09 06:05:41.070325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.716 [2024-12-09 06:05:41.070339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.716 [2024-12-09 06:05:41.075369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.716 [2024-12-09 06:05:41.075430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.716 [2024-12-09 06:05:41.075446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.716 [2024-12-09 06:05:41.080878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.716 [2024-12-09 06:05:41.080942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.716 [2024-12-09 06:05:41.080957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.716 [2024-12-09 06:05:41.086146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.716 [2024-12-09 06:05:41.086215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.716 [2024-12-09 06:05:41.086230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.716 [2024-12-09 06:05:41.090656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.716 [2024-12-09 06:05:41.090717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.716 [2024-12-09 06:05:41.090732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.716 [2024-12-09 06:05:41.094230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.716 [2024-12-09 06:05:41.094277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.716 [2024-12-09 06:05:41.094291] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.716 [2024-12-09 06:05:41.098559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.716 [2024-12-09 06:05:41.098613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.716 [2024-12-09 06:05:41.098628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.716 [2024-12-09 06:05:41.103614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.716 [2024-12-09 06:05:41.103681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.716 [2024-12-09 06:05:41.103696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.716 [2024-12-09 06:05:41.108375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.716 [2024-12-09 06:05:41.108435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.716 [2024-12-09 06:05:41.108449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.716 [2024-12-09 06:05:41.113470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.716 [2024-12-09 06:05:41.113527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.716 [2024-12-09 06:05:41.113542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.716 [2024-12-09 06:05:41.118297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.716 [2024-12-09 06:05:41.118343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.716 [2024-12-09 06:05:41.118358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.716 [2024-12-09 06:05:41.123307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.716 [2024-12-09 06:05:41.123363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.716 [2024-12-09 06:05:41.123378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.716 [2024-12-09 06:05:41.127945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.716 [2024-12-09 06:05:41.127990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.716 
[2024-12-09 06:05:41.128005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.716 [2024-12-09 06:05:41.132211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.716 [2024-12-09 06:05:41.132258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.716 [2024-12-09 06:05:41.132273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.716 [2024-12-09 06:05:41.136555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.716 [2024-12-09 06:05:41.136602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.716 [2024-12-09 06:05:41.136616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.716 [2024-12-09 06:05:41.141237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.716 [2024-12-09 06:05:41.141289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.716 [2024-12-09 06:05:41.141304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.716 [2024-12-09 06:05:41.145483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.716 [2024-12-09 06:05:41.145529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.716 [2024-12-09 06:05:41.145543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.716 [2024-12-09 06:05:41.149862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.716 [2024-12-09 06:05:41.149909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.716 [2024-12-09 06:05:41.149923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.716 [2024-12-09 06:05:41.153247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.716 [2024-12-09 06:05:41.153290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.716 [2024-12-09 06:05:41.153304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.716 [2024-12-09 06:05:41.157891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.716 [2024-12-09 06:05:41.157941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12640 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.716 [2024-12-09 06:05:41.157956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.716 [2024-12-09 06:05:41.162615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.716 [2024-12-09 06:05:41.162683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.716 [2024-12-09 06:05:41.162699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.716 [2024-12-09 06:05:41.165991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.716 [2024-12-09 06:05:41.166039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.716 [2024-12-09 06:05:41.166055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.716 [2024-12-09 06:05:41.171368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.716 [2024-12-09 06:05:41.171428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.716 [2024-12-09 06:05:41.171443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.716 [2024-12-09 06:05:41.175686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.716 [2024-12-09 06:05:41.175735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.716 [2024-12-09 06:05:41.175750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.716 [2024-12-09 06:05:41.180583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.716 [2024-12-09 06:05:41.180642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.716 [2024-12-09 06:05:41.180670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.716 [2024-12-09 06:05:41.185442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.716 [2024-12-09 06:05:41.185493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.716 [2024-12-09 06:05:41.185509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.716 [2024-12-09 06:05:41.190517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.716 [2024-12-09 06:05:41.190580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.716 [2024-12-09 06:05:41.190595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.717 [2024-12-09 06:05:41.195466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.717 [2024-12-09 06:05:41.195530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.717 [2024-12-09 06:05:41.195545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.717 [2024-12-09 06:05:41.198476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.717 [2024-12-09 06:05:41.198525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.717 [2024-12-09 06:05:41.198539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.717 [2024-12-09 06:05:41.203414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.717 [2024-12-09 06:05:41.203476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.717 [2024-12-09 06:05:41.203491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.717 [2024-12-09 06:05:41.208722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.717 [2024-12-09 06:05:41.208779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.717 [2024-12-09 06:05:41.208794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.717 [2024-12-09 06:05:41.214068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.717 [2024-12-09 06:05:41.214121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.717 [2024-12-09 06:05:41.214136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.717 [2024-12-09 06:05:41.217564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.717 [2024-12-09 06:05:41.217606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.717 [2024-12-09 06:05:41.217620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.717 [2024-12-09 06:05:41.222081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.717 [2024-12-09 06:05:41.222132] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.717 [2024-12-09 06:05:41.222147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.717 [2024-12-09 06:05:41.227339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.717 [2024-12-09 06:05:41.227393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.717 [2024-12-09 06:05:41.227409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.717 [2024-12-09 06:05:41.232662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.717 [2024-12-09 06:05:41.232714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.717 [2024-12-09 06:05:41.232730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.717 [2024-12-09 06:05:41.237683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.717 [2024-12-09 06:05:41.237730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.717 [2024-12-09 06:05:41.237745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.717 [2024-12-09 06:05:41.240865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.717 [2024-12-09 06:05:41.240908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.717 [2024-12-09 06:05:41.240922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.717 [2024-12-09 06:05:41.246184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.717 [2024-12-09 06:05:41.246234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.717 [2024-12-09 06:05:41.246249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.717 [2024-12-09 06:05:41.251315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.717 [2024-12-09 06:05:41.251361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.717 [2024-12-09 06:05:41.251375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.717 [2024-12-09 06:05:41.256225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 
01:06:46.717 [2024-12-09 06:05:41.256273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.717 [2024-12-09 06:05:41.256287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.717 [2024-12-09 06:05:41.259918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.717 [2024-12-09 06:05:41.259967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.717 [2024-12-09 06:05:41.259980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.717 [2024-12-09 06:05:41.264048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.717 [2024-12-09 06:05:41.264091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.717 [2024-12-09 06:05:41.264104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.717 [2024-12-09 06:05:41.268883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.717 [2024-12-09 06:05:41.268926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.717 [2024-12-09 06:05:41.268941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.717 [2024-12-09 06:05:41.272406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.717 [2024-12-09 06:05:41.272446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.717 [2024-12-09 06:05:41.272459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.717 [2024-12-09 06:05:41.277122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.717 [2024-12-09 06:05:41.277165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.717 [2024-12-09 06:05:41.277180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.717 [2024-12-09 06:05:41.281183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.717 [2024-12-09 06:05:41.281224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.717 [2024-12-09 06:05:41.281237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.717 [2024-12-09 06:05:41.285407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.717 [2024-12-09 06:05:41.285446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.717 [2024-12-09 06:05:41.285459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.717 [2024-12-09 06:05:41.289414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.717 [2024-12-09 06:05:41.289452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.717 [2024-12-09 06:05:41.289465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.717 [2024-12-09 06:05:41.293365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.717 [2024-12-09 06:05:41.293401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.717 [2024-12-09 06:05:41.293414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.717 [2024-12-09 06:05:41.296884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.717 [2024-12-09 06:05:41.296921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.717 [2024-12-09 06:05:41.296935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.978 [2024-12-09 06:05:41.301434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.978 [2024-12-09 06:05:41.301473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.978 [2024-12-09 06:05:41.301487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.978 [2024-12-09 06:05:41.305466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.978 [2024-12-09 06:05:41.305506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.978 [2024-12-09 06:05:41.305519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.978 [2024-12-09 06:05:41.309384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.978 [2024-12-09 06:05:41.309424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.978 [2024-12-09 06:05:41.309437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.978 [2024-12-09 06:05:41.313625] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.978 [2024-12-09 06:05:41.313676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.978 [2024-12-09 06:05:41.313691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.978 [2024-12-09 06:05:41.317841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.978 [2024-12-09 06:05:41.317883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.978 [2024-12-09 06:05:41.317897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.978 [2024-12-09 06:05:41.321333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.978 [2024-12-09 06:05:41.321372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.978 [2024-12-09 06:05:41.321387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.978 [2024-12-09 06:05:41.325482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.978 [2024-12-09 06:05:41.325525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.978 [2024-12-09 06:05:41.325539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.978 [2024-12-09 06:05:41.329253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.978 [2024-12-09 06:05:41.329295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.978 [2024-12-09 06:05:41.329309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.978 [2024-12-09 06:05:41.333534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.978 [2024-12-09 06:05:41.333583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.978 [2024-12-09 06:05:41.333598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.978 [2024-12-09 06:05:41.337820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.978 [2024-12-09 06:05:41.337863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.978 [2024-12-09 06:05:41.337877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 
dnr:0 01:06:46.978 [2024-12-09 06:05:41.341879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.978 [2024-12-09 06:05:41.341922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.978 [2024-12-09 06:05:41.341936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.978 [2024-12-09 06:05:41.345879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.978 [2024-12-09 06:05:41.345925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.978 [2024-12-09 06:05:41.345939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.978 [2024-12-09 06:05:41.350502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.978 [2024-12-09 06:05:41.350555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.978 [2024-12-09 06:05:41.350569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.978 [2024-12-09 06:05:41.355324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.978 [2024-12-09 06:05:41.355380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.978 [2024-12-09 06:05:41.355395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.978 [2024-12-09 06:05:41.358274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.978 [2024-12-09 06:05:41.358316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.978 [2024-12-09 06:05:41.358330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.978 [2024-12-09 06:05:41.363685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.978 [2024-12-09 06:05:41.363730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.978 [2024-12-09 06:05:41.363744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.978 [2024-12-09 06:05:41.367071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.978 [2024-12-09 06:05:41.367121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.978 [2024-12-09 06:05:41.367135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.978 [2024-12-09 06:05:41.371521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.978 [2024-12-09 06:05:41.371572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.978 [2024-12-09 06:05:41.371586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.978 [2024-12-09 06:05:41.376581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.978 [2024-12-09 06:05:41.376623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.978 [2024-12-09 06:05:41.376637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.978 [2024-12-09 06:05:41.381928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.978 [2024-12-09 06:05:41.381969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.978 [2024-12-09 06:05:41.381984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.978 [2024-12-09 06:05:41.385232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.978 [2024-12-09 06:05:41.385275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.978 [2024-12-09 06:05:41.385289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.978 [2024-12-09 06:05:41.389698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.979 [2024-12-09 06:05:41.389751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.979 [2024-12-09 06:05:41.389766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.979 [2024-12-09 06:05:41.394795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.979 [2024-12-09 06:05:41.394850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.979 [2024-12-09 06:05:41.394864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.979 [2024-12-09 06:05:41.400155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.979 [2024-12-09 06:05:41.400207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.979 [2024-12-09 06:05:41.400221] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.979 [2024-12-09 06:05:41.403946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.979 [2024-12-09 06:05:41.403986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.979 [2024-12-09 06:05:41.403999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.979 [2024-12-09 06:05:41.408328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.979 [2024-12-09 06:05:41.408367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.979 [2024-12-09 06:05:41.408382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.979 [2024-12-09 06:05:41.413351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.979 [2024-12-09 06:05:41.413391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.979 [2024-12-09 06:05:41.413406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.979 [2024-12-09 06:05:41.418586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.979 [2024-12-09 06:05:41.418630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.979 [2024-12-09 06:05:41.418668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.979 [2024-12-09 06:05:41.421709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.979 [2024-12-09 06:05:41.421745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.979 [2024-12-09 06:05:41.421758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.979 [2024-12-09 06:05:41.425836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.979 [2024-12-09 06:05:41.425879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.979 [2024-12-09 06:05:41.425894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.979 [2024-12-09 06:05:41.431015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.979 [2024-12-09 06:05:41.431068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.979 [2024-12-09 06:05:41.431083] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.979 [2024-12-09 06:05:41.436371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.979 [2024-12-09 06:05:41.436433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.979 [2024-12-09 06:05:41.436448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.979 [2024-12-09 06:05:41.441348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.979 [2024-12-09 06:05:41.441397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.979 [2024-12-09 06:05:41.441411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.979 [2024-12-09 06:05:41.446097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.979 [2024-12-09 06:05:41.446146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.979 [2024-12-09 06:05:41.446160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.979 [2024-12-09 06:05:41.449356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.979 [2024-12-09 06:05:41.449399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.979 [2024-12-09 06:05:41.449412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.979 [2024-12-09 06:05:41.454754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.979 [2024-12-09 06:05:41.454799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.979 [2024-12-09 06:05:41.454813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.979 [2024-12-09 06:05:41.460003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.979 [2024-12-09 06:05:41.460057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.979 [2024-12-09 06:05:41.460072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.979 [2024-12-09 06:05:41.465500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.979 [2024-12-09 06:05:41.465559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:06:46.979 [2024-12-09 06:05:41.465574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.979 [2024-12-09 06:05:41.469242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.979 [2024-12-09 06:05:41.469290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.979 [2024-12-09 06:05:41.469304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.979 [2024-12-09 06:05:41.473909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.979 [2024-12-09 06:05:41.473959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.979 [2024-12-09 06:05:41.473973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.979 [2024-12-09 06:05:41.479165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.979 [2024-12-09 06:05:41.479214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.979 [2024-12-09 06:05:41.479229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.979 [2024-12-09 06:05:41.484185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.979 [2024-12-09 06:05:41.484235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.979 [2024-12-09 06:05:41.484249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.979 [2024-12-09 06:05:41.487534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.979 [2024-12-09 06:05:41.487579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.979 [2024-12-09 06:05:41.487593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.979 [2024-12-09 06:05:41.493016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.979 [2024-12-09 06:05:41.493074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.979 [2024-12-09 06:05:41.493088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.979 [2024-12-09 06:05:41.498304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.979 [2024-12-09 06:05:41.498365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.979 [2024-12-09 06:05:41.498380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.979 [2024-12-09 06:05:41.502095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.979 [2024-12-09 06:05:41.502153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.979 [2024-12-09 06:05:41.502168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.979 [2024-12-09 06:05:41.505982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.979 [2024-12-09 06:05:41.506034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.979 [2024-12-09 06:05:41.506048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.979 [2024-12-09 06:05:41.510583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.979 [2024-12-09 06:05:41.510630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.980 [2024-12-09 06:05:41.510657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.980 [2024-12-09 06:05:41.513766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.980 [2024-12-09 06:05:41.513806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.980 [2024-12-09 06:05:41.513820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.980 [2024-12-09 06:05:41.518506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.980 [2024-12-09 06:05:41.518551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.980 [2024-12-09 06:05:41.518566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.980 [2024-12-09 06:05:41.523202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.980 [2024-12-09 06:05:41.523244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.980 [2024-12-09 06:05:41.523257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.980 [2024-12-09 06:05:41.528339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.980 [2024-12-09 06:05:41.528391] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.980 [2024-12-09 06:05:41.528407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.980 [2024-12-09 06:05:41.533064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.980 [2024-12-09 06:05:41.533114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.980 [2024-12-09 06:05:41.533128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.980 [2024-12-09 06:05:41.536339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.980 [2024-12-09 06:05:41.536385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.980 [2024-12-09 06:05:41.536398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.980 [2024-12-09 06:05:41.540675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.980 [2024-12-09 06:05:41.540726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.980 [2024-12-09 06:05:41.540740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.980 [2024-12-09 06:05:41.544536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.980 [2024-12-09 06:05:41.544580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.980 [2024-12-09 06:05:41.544594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:46.980 [2024-12-09 06:05:41.548571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.980 [2024-12-09 06:05:41.548617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.980 [2024-12-09 06:05:41.548631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:46.980 [2024-12-09 06:05:41.552959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.980 [2024-12-09 06:05:41.553007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.980 [2024-12-09 06:05:41.553021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:46.980 [2024-12-09 06:05:41.556091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.980 [2024-12-09 06:05:41.556131] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.980 [2024-12-09 06:05:41.556145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:46.980 [2024-12-09 06:05:41.560280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:46.980 [2024-12-09 06:05:41.560330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:46.980 [2024-12-09 06:05:41.560344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.240 [2024-12-09 06:05:41.564261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.240 [2024-12-09 06:05:41.564315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.240 [2024-12-09 06:05:41.564330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.240 [2024-12-09 06:05:41.568311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.240 [2024-12-09 06:05:41.568359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.240 [2024-12-09 06:05:41.568373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.240 [2024-12-09 06:05:41.571804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.241 [2024-12-09 06:05:41.571845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.241 [2024-12-09 06:05:41.571859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.241 [2024-12-09 06:05:41.576723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.241 [2024-12-09 06:05:41.576771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.241 [2024-12-09 06:05:41.576785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.241 [2024-12-09 06:05:41.581438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.241 [2024-12-09 06:05:41.581483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.241 [2024-12-09 06:05:41.581497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.241 [2024-12-09 06:05:41.586534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 
01:06:47.241 [2024-12-09 06:05:41.586581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.241 [2024-12-09 06:05:41.586596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.241 [2024-12-09 06:05:41.590680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.241 [2024-12-09 06:05:41.590720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.241 [2024-12-09 06:05:41.590734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.241 [2024-12-09 06:05:41.593828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.241 [2024-12-09 06:05:41.593869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.241 [2024-12-09 06:05:41.593883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.241 [2024-12-09 06:05:41.599090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.241 [2024-12-09 06:05:41.599142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.241 [2024-12-09 06:05:41.599157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.241 [2024-12-09 06:05:41.604063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.241 [2024-12-09 06:05:41.604107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.241 [2024-12-09 06:05:41.604122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.241 [2024-12-09 06:05:41.607353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.241 [2024-12-09 06:05:41.607393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.241 [2024-12-09 06:05:41.607406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.241 [2024-12-09 06:05:41.612352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.241 [2024-12-09 06:05:41.612398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.241 [2024-12-09 06:05:41.612414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.241 [2024-12-09 06:05:41.617681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.241 [2024-12-09 06:05:41.617726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.241 [2024-12-09 06:05:41.617740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.241 [2024-12-09 06:05:41.621721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.241 [2024-12-09 06:05:41.621764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.241 [2024-12-09 06:05:41.621777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.241 [2024-12-09 06:05:41.625265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.241 [2024-12-09 06:05:41.625306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.241 [2024-12-09 06:05:41.625320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.241 [2024-12-09 06:05:41.630027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.241 [2024-12-09 06:05:41.630068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.241 [2024-12-09 06:05:41.630081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.241 [2024-12-09 06:05:41.634884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.241 [2024-12-09 06:05:41.634924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.241 [2024-12-09 06:05:41.634938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.241 [2024-12-09 06:05:41.639909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.241 [2024-12-09 06:05:41.639952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.241 [2024-12-09 06:05:41.639966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.241 [2024-12-09 06:05:41.643525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.241 [2024-12-09 06:05:41.643564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.241 [2024-12-09 06:05:41.643577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.241 [2024-12-09 06:05:41.647963] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.241 [2024-12-09 06:05:41.648002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.241 [2024-12-09 06:05:41.648015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.241 [2024-12-09 06:05:41.653197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.241 [2024-12-09 06:05:41.653240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.241 [2024-12-09 06:05:41.653255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.241 [2024-12-09 06:05:41.658578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.241 [2024-12-09 06:05:41.658627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.241 [2024-12-09 06:05:41.658641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.241 [2024-12-09 06:05:41.663741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.241 [2024-12-09 06:05:41.663790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.241 [2024-12-09 06:05:41.663805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.241 [2024-12-09 06:05:41.666857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.241 [2024-12-09 06:05:41.666895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.241 [2024-12-09 06:05:41.666908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.241 [2024-12-09 06:05:41.672116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.241 [2024-12-09 06:05:41.672166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.241 [2024-12-09 06:05:41.672180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.241 [2024-12-09 06:05:41.677156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.241 [2024-12-09 06:05:41.677199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.241 [2024-12-09 06:05:41.677213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 
01:06:47.241 [2024-12-09 06:05:41.682562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.241 [2024-12-09 06:05:41.682607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.241 [2024-12-09 06:05:41.682620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.241 [2024-12-09 06:05:41.686279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.241 [2024-12-09 06:05:41.686318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.241 [2024-12-09 06:05:41.686332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.242 [2024-12-09 06:05:41.690793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.242 [2024-12-09 06:05:41.690831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.242 [2024-12-09 06:05:41.690844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.242 [2024-12-09 06:05:41.696007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.242 [2024-12-09 06:05:41.696048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.242 [2024-12-09 06:05:41.696061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.242 [2024-12-09 06:05:41.700381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.242 [2024-12-09 06:05:41.700420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.242 [2024-12-09 06:05:41.700433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.242 [2024-12-09 06:05:41.705005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.242 [2024-12-09 06:05:41.705042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.242 [2024-12-09 06:05:41.705056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.242 [2024-12-09 06:05:41.708177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.242 [2024-12-09 06:05:41.708214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.242 [2024-12-09 06:05:41.708227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.242 [2024-12-09 06:05:41.712509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.242 [2024-12-09 06:05:41.712545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.242 [2024-12-09 06:05:41.712558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.242 [2024-12-09 06:05:41.716314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.242 [2024-12-09 06:05:41.716352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.242 [2024-12-09 06:05:41.716366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.242 [2024-12-09 06:05:41.720112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.242 [2024-12-09 06:05:41.720150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.242 [2024-12-09 06:05:41.720164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.242 [2024-12-09 06:05:41.724020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.242 [2024-12-09 06:05:41.724058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.242 [2024-12-09 06:05:41.724071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.242 [2024-12-09 06:05:41.728318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.242 [2024-12-09 06:05:41.728357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.242 [2024-12-09 06:05:41.728370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.242 [2024-12-09 06:05:41.732715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.242 [2024-12-09 06:05:41.732752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.242 [2024-12-09 06:05:41.732765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.242 [2024-12-09 06:05:41.737077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.242 [2024-12-09 06:05:41.737117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.242 [2024-12-09 06:05:41.737130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.242 [2024-12-09 06:05:41.740934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.242 [2024-12-09 06:05:41.740971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.242 [2024-12-09 06:05:41.740984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.242 [2024-12-09 06:05:41.745038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.242 [2024-12-09 06:05:41.745075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.242 [2024-12-09 06:05:41.745089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.242 [2024-12-09 06:05:41.748603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.242 [2024-12-09 06:05:41.748640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.242 [2024-12-09 06:05:41.748668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.242 [2024-12-09 06:05:41.752232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.242 [2024-12-09 06:05:41.752268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.242 [2024-12-09 06:05:41.752281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.242 [2024-12-09 06:05:41.756531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.242 [2024-12-09 06:05:41.756568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.242 [2024-12-09 06:05:41.756582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.242 [2024-12-09 06:05:41.760701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.242 [2024-12-09 06:05:41.760737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.242 [2024-12-09 06:05:41.760750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.242 [2024-12-09 06:05:41.764020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.242 [2024-12-09 06:05:41.764057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.242 [2024-12-09 06:05:41.764069] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.242 [2024-12-09 06:05:41.768910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.242 [2024-12-09 06:05:41.768948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.242 [2024-12-09 06:05:41.768961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.242 [2024-12-09 06:05:41.774165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.242 [2024-12-09 06:05:41.774206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.242 [2024-12-09 06:05:41.774220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.242 [2024-12-09 06:05:41.778718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.242 [2024-12-09 06:05:41.778756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.242 [2024-12-09 06:05:41.778769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.242 [2024-12-09 06:05:41.781866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.242 [2024-12-09 06:05:41.781907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.242 [2024-12-09 06:05:41.781920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.242 [2024-12-09 06:05:41.787152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.242 [2024-12-09 06:05:41.787190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.242 [2024-12-09 06:05:41.787203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.242 [2024-12-09 06:05:41.792307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.242 [2024-12-09 06:05:41.792345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.242 [2024-12-09 06:05:41.792359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.242 [2024-12-09 06:05:41.797405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.242 [2024-12-09 06:05:41.797443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.242 
[2024-12-09 06:05:41.797457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.242 [2024-12-09 06:05:41.800819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.242 [2024-12-09 06:05:41.800856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.243 [2024-12-09 06:05:41.800869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.243 [2024-12-09 06:05:41.804983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.243 [2024-12-09 06:05:41.805020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.243 [2024-12-09 06:05:41.805033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.243 [2024-12-09 06:05:41.809892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.243 [2024-12-09 06:05:41.809936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.243 [2024-12-09 06:05:41.809950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.243 [2024-12-09 06:05:41.814349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.243 [2024-12-09 06:05:41.814386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.243 [2024-12-09 06:05:41.814399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.243 [2024-12-09 06:05:41.818852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.243 [2024-12-09 06:05:41.818910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.243 [2024-12-09 06:05:41.818924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.243 [2024-12-09 06:05:41.823738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.243 [2024-12-09 06:05:41.823774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.243 [2024-12-09 06:05:41.823787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.503 [2024-12-09 06:05:41.828285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.503 [2024-12-09 06:05:41.828321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18592 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.503 [2024-12-09 06:05:41.828335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.503 [2024-12-09 06:05:41.833436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.503 [2024-12-09 06:05:41.833473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.503 [2024-12-09 06:05:41.833486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.503 [2024-12-09 06:05:41.837113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.503 [2024-12-09 06:05:41.837150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.503 [2024-12-09 06:05:41.837163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.503 [2024-12-09 06:05:41.841591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.503 [2024-12-09 06:05:41.841631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.503 [2024-12-09 06:05:41.841657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.503 [2024-12-09 06:05:41.846686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.504 [2024-12-09 06:05:41.846727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.504 [2024-12-09 06:05:41.846742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.504 [2024-12-09 06:05:41.850111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.504 [2024-12-09 06:05:41.850149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.504 [2024-12-09 06:05:41.850162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.504 [2024-12-09 06:05:41.854351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.504 [2024-12-09 06:05:41.854392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.504 [2024-12-09 06:05:41.854405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.504 [2024-12-09 06:05:41.858454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.504 [2024-12-09 06:05:41.858494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.504 [2024-12-09 06:05:41.858507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.504 [2024-12-09 06:05:41.861922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.504 [2024-12-09 06:05:41.861960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.504 [2024-12-09 06:05:41.861973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.504 [2024-12-09 06:05:41.866412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.504 [2024-12-09 06:05:41.866450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.504 [2024-12-09 06:05:41.866464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.504 [2024-12-09 06:05:41.870848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.504 [2024-12-09 06:05:41.870889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.504 [2024-12-09 06:05:41.870902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.504 [2024-12-09 06:05:41.874712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.504 [2024-12-09 06:05:41.874752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.504 [2024-12-09 06:05:41.874765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.504 [2024-12-09 06:05:41.879171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.504 [2024-12-09 06:05:41.879216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.504 [2024-12-09 06:05:41.879229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.504 [2024-12-09 06:05:41.882552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.504 [2024-12-09 06:05:41.882595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.504 [2024-12-09 06:05:41.882608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.504 [2024-12-09 06:05:41.887761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.504 [2024-12-09 06:05:41.887800] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.504 [2024-12-09 06:05:41.887814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.504 [2024-12-09 06:05:41.892957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.504 [2024-12-09 06:05:41.892999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.504 [2024-12-09 06:05:41.893013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.504 [2024-12-09 06:05:41.896640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.504 [2024-12-09 06:05:41.896688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.504 [2024-12-09 06:05:41.896701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.504 [2024-12-09 06:05:41.901128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.504 [2024-12-09 06:05:41.901166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.504 [2024-12-09 06:05:41.901180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.504 [2024-12-09 06:05:41.906189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.504 [2024-12-09 06:05:41.906232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.504 [2024-12-09 06:05:41.906245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.504 [2024-12-09 06:05:41.911138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.504 [2024-12-09 06:05:41.911181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.504 [2024-12-09 06:05:41.911195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.504 [2024-12-09 06:05:41.914313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.504 [2024-12-09 06:05:41.914350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.504 [2024-12-09 06:05:41.914363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.504 [2024-12-09 06:05:41.919672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xb26580) 01:06:47.504 [2024-12-09 06:05:41.919715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.504 [2024-12-09 06:05:41.919728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.504 [2024-12-09 06:05:41.924862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.504 [2024-12-09 06:05:41.924907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.504 [2024-12-09 06:05:41.924920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.504 [2024-12-09 06:05:41.929359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.504 [2024-12-09 06:05:41.929409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.504 [2024-12-09 06:05:41.929423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.504 [2024-12-09 06:05:41.932707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.504 [2024-12-09 06:05:41.932748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.504 [2024-12-09 06:05:41.932762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.504 [2024-12-09 06:05:41.937876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.504 [2024-12-09 06:05:41.937918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.504 [2024-12-09 06:05:41.937932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.504 [2024-12-09 06:05:41.941940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.504 [2024-12-09 06:05:41.941977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.504 [2024-12-09 06:05:41.941991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.504 6986.00 IOPS, 873.25 MiB/s [2024-12-09T06:05:42.090Z] [2024-12-09 06:05:41.948290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.504 [2024-12-09 06:05:41.948336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.504 [2024-12-09 06:05:41.948350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.504 [2024-12-09 
06:05:41.953590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.504 [2024-12-09 06:05:41.953634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.504 [2024-12-09 06:05:41.953662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.504 [2024-12-09 06:05:41.958615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.504 [2024-12-09 06:05:41.958685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.504 [2024-12-09 06:05:41.958699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.504 [2024-12-09 06:05:41.961841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.505 [2024-12-09 06:05:41.961881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.505 [2024-12-09 06:05:41.961894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.505 [2024-12-09 06:05:41.965986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.505 [2024-12-09 06:05:41.966024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.505 [2024-12-09 06:05:41.966037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.505 [2024-12-09 06:05:41.970807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.505 [2024-12-09 06:05:41.970846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.505 [2024-12-09 06:05:41.970860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.505 [2024-12-09 06:05:41.973917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.505 [2024-12-09 06:05:41.973952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.505 [2024-12-09 06:05:41.973966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.505 [2024-12-09 06:05:41.978007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.505 [2024-12-09 06:05:41.978045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.505 [2024-12-09 06:05:41.978058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0016 
p:0 m:0 dnr:0 01:06:47.505 [2024-12-09 06:05:41.981940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.505 [2024-12-09 06:05:41.981978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.505 [2024-12-09 06:05:41.981992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.505 [2024-12-09 06:05:41.986307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.505 [2024-12-09 06:05:41.986346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.505 [2024-12-09 06:05:41.986359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.505 [2024-12-09 06:05:41.989795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.505 [2024-12-09 06:05:41.989833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.505 [2024-12-09 06:05:41.989847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.505 [2024-12-09 06:05:41.994183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.505 [2024-12-09 06:05:41.994222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.505 [2024-12-09 06:05:41.994236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.505 [2024-12-09 06:05:41.998449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.505 [2024-12-09 06:05:41.998486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.505 [2024-12-09 06:05:41.998499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.505 [2024-12-09 06:05:42.002052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.505 [2024-12-09 06:05:42.002088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.505 [2024-12-09 06:05:42.002101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.505 [2024-12-09 06:05:42.006334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.505 [2024-12-09 06:05:42.006372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.505 [2024-12-09 06:05:42.006385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.505 [2024-12-09 06:05:42.010807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.505 [2024-12-09 06:05:42.010845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.505 [2024-12-09 06:05:42.010859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.505 [2024-12-09 06:05:42.014039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.505 [2024-12-09 06:05:42.014076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.505 [2024-12-09 06:05:42.014088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.505 [2024-12-09 06:05:42.018601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.505 [2024-12-09 06:05:42.018639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.505 [2024-12-09 06:05:42.018675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.505 [2024-12-09 06:05:42.023673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.505 [2024-12-09 06:05:42.023712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.505 [2024-12-09 06:05:42.023725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.505 [2024-12-09 06:05:42.027236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.505 [2024-12-09 06:05:42.027274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.505 [2024-12-09 06:05:42.027287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.505 [2024-12-09 06:05:42.031666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.505 [2024-12-09 06:05:42.031705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.505 [2024-12-09 06:05:42.031718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.505 [2024-12-09 06:05:42.036906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.505 [2024-12-09 06:05:42.036950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.505 [2024-12-09 06:05:42.036965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.505 [2024-12-09 06:05:42.040405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.505 [2024-12-09 06:05:42.040443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.505 [2024-12-09 06:05:42.040457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.505 [2024-12-09 06:05:42.044977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.505 [2024-12-09 06:05:42.045016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.505 [2024-12-09 06:05:42.045030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.505 [2024-12-09 06:05:42.049750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.505 [2024-12-09 06:05:42.049788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.505 [2024-12-09 06:05:42.049802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.505 [2024-12-09 06:05:42.055130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.505 [2024-12-09 06:05:42.055170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.505 [2024-12-09 06:05:42.055184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.505 [2024-12-09 06:05:42.058239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.505 [2024-12-09 06:05:42.058276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.505 [2024-12-09 06:05:42.058289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.505 [2024-12-09 06:05:42.062781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.505 [2024-12-09 06:05:42.062834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.505 [2024-12-09 06:05:42.062847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.505 [2024-12-09 06:05:42.067448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.505 [2024-12-09 06:05:42.067494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.505 
[2024-12-09 06:05:42.067507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.505 [2024-12-09 06:05:42.071608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.506 [2024-12-09 06:05:42.071658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.506 [2024-12-09 06:05:42.071673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.506 [2024-12-09 06:05:42.076783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.506 [2024-12-09 06:05:42.076831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.506 [2024-12-09 06:05:42.076845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.506 [2024-12-09 06:05:42.081481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.506 [2024-12-09 06:05:42.081535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.506 [2024-12-09 06:05:42.081550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.506 [2024-12-09 06:05:42.084772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.506 [2024-12-09 06:05:42.084811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.506 [2024-12-09 06:05:42.084825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.766 [2024-12-09 06:05:42.089750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.766 [2024-12-09 06:05:42.089793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.766 [2024-12-09 06:05:42.089806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.766 [2024-12-09 06:05:42.095082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.766 [2024-12-09 06:05:42.095133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.766 [2024-12-09 06:05:42.095149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.766 [2024-12-09 06:05:42.099977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.766 [2024-12-09 06:05:42.100022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19904 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 01:06:47.766 [2024-12-09 06:05:42.100035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.766 [2024-12-09 06:05:42.104416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.766 [2024-12-09 06:05:42.104457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.766 [2024-12-09 06:05:42.104471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.766 [2024-12-09 06:05:42.109697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.766 [2024-12-09 06:05:42.109744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.766 [2024-12-09 06:05:42.109759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.766 [2024-12-09 06:05:42.114026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.766 [2024-12-09 06:05:42.114075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.766 [2024-12-09 06:05:42.114089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.766 [2024-12-09 06:05:42.117220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.766 [2024-12-09 06:05:42.117263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.766 [2024-12-09 06:05:42.117276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.766 [2024-12-09 06:05:42.122222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.766 [2024-12-09 06:05:42.122279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.766 [2024-12-09 06:05:42.122294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.766 [2024-12-09 06:05:42.127586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.766 [2024-12-09 06:05:42.127637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.766 [2024-12-09 06:05:42.127665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.766 [2024-12-09 06:05:42.132252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.766 [2024-12-09 06:05:42.132303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:5 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.766 [2024-12-09 06:05:42.132317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.766 [2024-12-09 06:05:42.135470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.766 [2024-12-09 06:05:42.135516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.766 [2024-12-09 06:05:42.135530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.766 [2024-12-09 06:05:42.139777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.766 [2024-12-09 06:05:42.139828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.766 [2024-12-09 06:05:42.139843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.766 [2024-12-09 06:05:42.143918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.766 [2024-12-09 06:05:42.143979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.766 [2024-12-09 06:05:42.143994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.766 [2024-12-09 06:05:42.147668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.766 [2024-12-09 06:05:42.147722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.766 [2024-12-09 06:05:42.147736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.766 [2024-12-09 06:05:42.151534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.766 [2024-12-09 06:05:42.151596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.766 [2024-12-09 06:05:42.151611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.766 [2024-12-09 06:05:42.156835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.766 [2024-12-09 06:05:42.156897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.766 [2024-12-09 06:05:42.156911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.766 [2024-12-09 06:05:42.162104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.766 [2024-12-09 06:05:42.162153] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.766 [2024-12-09 06:05:42.162167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.766 [2024-12-09 06:05:42.166852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.766 [2024-12-09 06:05:42.166910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.766 [2024-12-09 06:05:42.166924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.766 [2024-12-09 06:05:42.169909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.766 [2024-12-09 06:05:42.169955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.766 [2024-12-09 06:05:42.169968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.766 [2024-12-09 06:05:42.175099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.766 [2024-12-09 06:05:42.175161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.766 [2024-12-09 06:05:42.175175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.766 [2024-12-09 06:05:42.180425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.766 [2024-12-09 06:05:42.180484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.766 [2024-12-09 06:05:42.180498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.766 [2024-12-09 06:05:42.185520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.766 [2024-12-09 06:05:42.185568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.766 [2024-12-09 06:05:42.185582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.766 [2024-12-09 06:05:42.189036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.767 [2024-12-09 06:05:42.189082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.767 [2024-12-09 06:05:42.189096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.767 [2024-12-09 06:05:42.193406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.767 
[2024-12-09 06:05:42.193457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.767 [2024-12-09 06:05:42.193471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.767 [2024-12-09 06:05:42.198467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.767 [2024-12-09 06:05:42.198526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.767 [2024-12-09 06:05:42.198540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.767 [2024-12-09 06:05:42.203351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.767 [2024-12-09 06:05:42.203411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.767 [2024-12-09 06:05:42.203426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.767 [2024-12-09 06:05:42.206371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.767 [2024-12-09 06:05:42.206414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.767 [2024-12-09 06:05:42.206427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.767 [2024-12-09 06:05:42.211454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.767 [2024-12-09 06:05:42.211508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.767 [2024-12-09 06:05:42.211522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.767 [2024-12-09 06:05:42.215972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.767 [2024-12-09 06:05:42.216021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.767 [2024-12-09 06:05:42.216034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.767 [2024-12-09 06:05:42.221155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.767 [2024-12-09 06:05:42.221203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.767 [2024-12-09 06:05:42.221217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.767 [2024-12-09 06:05:42.225737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xb26580) 01:06:47.767 [2024-12-09 06:05:42.225773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.767 [2024-12-09 06:05:42.225786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.767 [2024-12-09 06:05:42.230745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.767 [2024-12-09 06:05:42.230787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.767 [2024-12-09 06:05:42.230800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.767 [2024-12-09 06:05:42.235551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.767 [2024-12-09 06:05:42.235599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.767 [2024-12-09 06:05:42.235613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.767 [2024-12-09 06:05:42.238317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.767 [2024-12-09 06:05:42.238355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.767 [2024-12-09 06:05:42.238368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.767 [2024-12-09 06:05:42.243152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.767 [2024-12-09 06:05:42.243203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.767 [2024-12-09 06:05:42.243219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.767 [2024-12-09 06:05:42.248281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.767 [2024-12-09 06:05:42.248338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.767 [2024-12-09 06:05:42.248353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.767 [2024-12-09 06:05:42.252785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.767 [2024-12-09 06:05:42.252840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.767 [2024-12-09 06:05:42.252855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.767 [2024-12-09 06:05:42.256140] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.767 [2024-12-09 06:05:42.256185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.767 [2024-12-09 06:05:42.256199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.767 [2024-12-09 06:05:42.260836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.767 [2024-12-09 06:05:42.260885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.767 [2024-12-09 06:05:42.260899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.767 [2024-12-09 06:05:42.265598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.767 [2024-12-09 06:05:42.265669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.767 [2024-12-09 06:05:42.265685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.767 [2024-12-09 06:05:42.269099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.767 [2024-12-09 06:05:42.269144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.767 [2024-12-09 06:05:42.269158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.767 [2024-12-09 06:05:42.273428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.767 [2024-12-09 06:05:42.273482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.767 [2024-12-09 06:05:42.273496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.767 [2024-12-09 06:05:42.277840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.767 [2024-12-09 06:05:42.277888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.767 [2024-12-09 06:05:42.277902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.767 [2024-12-09 06:05:42.282253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.767 [2024-12-09 06:05:42.282302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.767 [2024-12-09 06:05:42.282318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 
01:06:47.767 [2024-12-09 06:05:42.286946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.767 [2024-12-09 06:05:42.286995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.767 [2024-12-09 06:05:42.287008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.767 [2024-12-09 06:05:42.291463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.767 [2024-12-09 06:05:42.291515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.767 [2024-12-09 06:05:42.291529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.767 [2024-12-09 06:05:42.296785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.767 [2024-12-09 06:05:42.296839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.767 [2024-12-09 06:05:42.296854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.767 [2024-12-09 06:05:42.302107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.767 [2024-12-09 06:05:42.302180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.767 [2024-12-09 06:05:42.302195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.767 [2024-12-09 06:05:42.306787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.767 [2024-12-09 06:05:42.306843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.767 [2024-12-09 06:05:42.306858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.767 [2024-12-09 06:05:42.309476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.768 [2024-12-09 06:05:42.309516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.768 [2024-12-09 06:05:42.309530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.768 [2024-12-09 06:05:42.314634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.768 [2024-12-09 06:05:42.314715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.768 [2024-12-09 06:05:42.314731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 
cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.768 [2024-12-09 06:05:42.318078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.768 [2024-12-09 06:05:42.318125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.768 [2024-12-09 06:05:42.318139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.768 [2024-12-09 06:05:42.322263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.768 [2024-12-09 06:05:42.322321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.768 [2024-12-09 06:05:42.322335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.768 [2024-12-09 06:05:42.326639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.768 [2024-12-09 06:05:42.326718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.768 [2024-12-09 06:05:42.326732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.768 [2024-12-09 06:05:42.330531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.768 [2024-12-09 06:05:42.330582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.768 [2024-12-09 06:05:42.330596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:47.768 [2024-12-09 06:05:42.334593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.768 [2024-12-09 06:05:42.334673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.768 [2024-12-09 06:05:42.334690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:47.768 [2024-12-09 06:05:42.339102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.768 [2024-12-09 06:05:42.339160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.768 [2024-12-09 06:05:42.339176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:47.768 [2024-12-09 06:05:42.342941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.768 [2024-12-09 06:05:42.342993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.768 [2024-12-09 06:05:42.343007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:47.768 [2024-12-09 06:05:42.348183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:47.768 [2024-12-09 06:05:42.348251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:47.768 [2024-12-09 06:05:42.348266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.028 [2024-12-09 06:05:42.353227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.028 [2024-12-09 06:05:42.353293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.028 [2024-12-09 06:05:42.353309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.028 [2024-12-09 06:05:42.356172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.028 [2024-12-09 06:05:42.356220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.028 [2024-12-09 06:05:42.356235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.028 [2024-12-09 06:05:42.361307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.028 [2024-12-09 06:05:42.361370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.028 [2024-12-09 06:05:42.361385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.028 [2024-12-09 06:05:42.364509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.028 [2024-12-09 06:05:42.364557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.028 [2024-12-09 06:05:42.364571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.028 [2024-12-09 06:05:42.368673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.028 [2024-12-09 06:05:42.368723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.028 [2024-12-09 06:05:42.368737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.028 [2024-12-09 06:05:42.373509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.028 [2024-12-09 06:05:42.373569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.028 [2024-12-09 06:05:42.373583] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.028 [2024-12-09 06:05:42.378089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.028 [2024-12-09 06:05:42.378153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.028 [2024-12-09 06:05:42.378168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.028 [2024-12-09 06:05:42.382994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.028 [2024-12-09 06:05:42.383055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.028 [2024-12-09 06:05:42.383070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.028 [2024-12-09 06:05:42.386153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.028 [2024-12-09 06:05:42.386202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.028 [2024-12-09 06:05:42.386217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.028 [2024-12-09 06:05:42.391196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.028 [2024-12-09 06:05:42.391252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.028 [2024-12-09 06:05:42.391266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.028 [2024-12-09 06:05:42.395915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.028 [2024-12-09 06:05:42.395973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.028 [2024-12-09 06:05:42.395987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.028 [2024-12-09 06:05:42.400820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.028 [2024-12-09 06:05:42.400870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.028 [2024-12-09 06:05:42.400884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.028 [2024-12-09 06:05:42.406207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.028 [2024-12-09 06:05:42.406261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.028 
[2024-12-09 06:05:42.406276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.028 [2024-12-09 06:05:42.411795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.028 [2024-12-09 06:05:42.411852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.028 [2024-12-09 06:05:42.411867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.028 [2024-12-09 06:05:42.415008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.028 [2024-12-09 06:05:42.415054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.028 [2024-12-09 06:05:42.415067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.028 [2024-12-09 06:05:42.419329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.028 [2024-12-09 06:05:42.419378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.028 [2024-12-09 06:05:42.419392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.028 [2024-12-09 06:05:42.424070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.028 [2024-12-09 06:05:42.424119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.028 [2024-12-09 06:05:42.424133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.029 [2024-12-09 06:05:42.428293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.029 [2024-12-09 06:05:42.428342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.029 [2024-12-09 06:05:42.428356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.029 [2024-12-09 06:05:42.432815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.029 [2024-12-09 06:05:42.432867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.029 [2024-12-09 06:05:42.432881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.029 [2024-12-09 06:05:42.437507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.029 [2024-12-09 06:05:42.437562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 01:06:48.029 [2024-12-09 06:05:42.437577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.029 [2024-12-09 06:05:42.443138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.029 [2024-12-09 06:05:42.443210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.029 [2024-12-09 06:05:42.443226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.029 [2024-12-09 06:05:42.448801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.029 [2024-12-09 06:05:42.448866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.029 [2024-12-09 06:05:42.448881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.029 [2024-12-09 06:05:42.452403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.029 [2024-12-09 06:05:42.452463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.029 [2024-12-09 06:05:42.452477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.029 [2024-12-09 06:05:42.456998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.029 [2024-12-09 06:05:42.457057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.029 [2024-12-09 06:05:42.457070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.029 [2024-12-09 06:05:42.462251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.029 [2024-12-09 06:05:42.462316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.029 [2024-12-09 06:05:42.462331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.029 [2024-12-09 06:05:42.467845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.029 [2024-12-09 06:05:42.467912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.029 [2024-12-09 06:05:42.467927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.029 [2024-12-09 06:05:42.472489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.029 [2024-12-09 06:05:42.472551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:13 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.029 [2024-12-09 06:05:42.472565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.029 [2024-12-09 06:05:42.476115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.029 [2024-12-09 06:05:42.476171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.029 [2024-12-09 06:05:42.476186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.029 [2024-12-09 06:05:42.480515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.029 [2024-12-09 06:05:42.480571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.029 [2024-12-09 06:05:42.480586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.029 [2024-12-09 06:05:42.485537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.029 [2024-12-09 06:05:42.485605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.029 [2024-12-09 06:05:42.485620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.029 [2024-12-09 06:05:42.490753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.029 [2024-12-09 06:05:42.490820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.029 [2024-12-09 06:05:42.490837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.029 [2024-12-09 06:05:42.495898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.029 [2024-12-09 06:05:42.495957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.029 [2024-12-09 06:05:42.495973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.029 [2024-12-09 06:05:42.499256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.029 [2024-12-09 06:05:42.499304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.029 [2024-12-09 06:05:42.499318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.029 [2024-12-09 06:05:42.504065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.029 [2024-12-09 06:05:42.504128] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.029 [2024-12-09 06:05:42.504143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.029 [2024-12-09 06:05:42.509184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.029 [2024-12-09 06:05:42.509252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.029 [2024-12-09 06:05:42.509267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.029 [2024-12-09 06:05:42.514406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.029 [2024-12-09 06:05:42.514473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.029 [2024-12-09 06:05:42.514488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.029 [2024-12-09 06:05:42.518170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.029 [2024-12-09 06:05:42.518226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.029 [2024-12-09 06:05:42.518241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.029 [2024-12-09 06:05:42.522717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.029 [2024-12-09 06:05:42.522767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.029 [2024-12-09 06:05:42.522781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.029 [2024-12-09 06:05:42.527903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.029 [2024-12-09 06:05:42.527964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.029 [2024-12-09 06:05:42.527978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.029 [2024-12-09 06:05:42.533027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.029 [2024-12-09 06:05:42.533081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.029 [2024-12-09 06:05:42.533096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.029 [2024-12-09 06:05:42.537638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.029 
[2024-12-09 06:05:42.537697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.029 [2024-12-09 06:05:42.537711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.029 [2024-12-09 06:05:42.541145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.029 [2024-12-09 06:05:42.541186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.029 [2024-12-09 06:05:42.541201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.029 [2024-12-09 06:05:42.545599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.029 [2024-12-09 06:05:42.545662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.029 [2024-12-09 06:05:42.545678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.029 [2024-12-09 06:05:42.550127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.029 [2024-12-09 06:05:42.550179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.029 [2024-12-09 06:05:42.550194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.030 [2024-12-09 06:05:42.553979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.030 [2024-12-09 06:05:42.554026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.030 [2024-12-09 06:05:42.554040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.030 [2024-12-09 06:05:42.557683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.030 [2024-12-09 06:05:42.557724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.030 [2024-12-09 06:05:42.557737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.030 [2024-12-09 06:05:42.561696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.030 [2024-12-09 06:05:42.561742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.030 [2024-12-09 06:05:42.561755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.030 [2024-12-09 06:05:42.565770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xb26580) 01:06:48.030 [2024-12-09 06:05:42.565814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.030 [2024-12-09 06:05:42.565828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.030 [2024-12-09 06:05:42.569904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.030 [2024-12-09 06:05:42.569947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.030 [2024-12-09 06:05:42.569960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.030 [2024-12-09 06:05:42.573934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.030 [2024-12-09 06:05:42.573984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.030 [2024-12-09 06:05:42.573997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.030 [2024-12-09 06:05:42.578070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.030 [2024-12-09 06:05:42.578116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.030 [2024-12-09 06:05:42.578131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.030 [2024-12-09 06:05:42.582410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.030 [2024-12-09 06:05:42.582454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.030 [2024-12-09 06:05:42.582467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.030 [2024-12-09 06:05:42.586018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.030 [2024-12-09 06:05:42.586059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.030 [2024-12-09 06:05:42.586073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.030 [2024-12-09 06:05:42.589753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.030 [2024-12-09 06:05:42.589795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.030 [2024-12-09 06:05:42.589809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.030 [2024-12-09 06:05:42.594010] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.030 [2024-12-09 06:05:42.594054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.030 [2024-12-09 06:05:42.594068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.030 [2024-12-09 06:05:42.598773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.030 [2024-12-09 06:05:42.598820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.030 [2024-12-09 06:05:42.598834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.030 [2024-12-09 06:05:42.603406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.030 [2024-12-09 06:05:42.603449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.030 [2024-12-09 06:05:42.603463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.030 [2024-12-09 06:05:42.606706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.030 [2024-12-09 06:05:42.606744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.030 [2024-12-09 06:05:42.606757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.030 [2024-12-09 06:05:42.611520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.030 [2024-12-09 06:05:42.611565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.030 [2024-12-09 06:05:42.611579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.615809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.615845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.615858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.620819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.620862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.620875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 
01:06:48.291 [2024-12-09 06:05:42.624241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.624278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.624291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.628636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.628688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.628702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.633826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.633862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.633875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.638703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.638741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.638755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.641587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.641623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.641635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.646355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.646392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.646405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.650100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.650138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.650151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.654354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.654394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.654408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.658740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.658778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.658791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.662534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.662573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.662586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.666855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.666894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.666908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.670720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.670758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.670771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.674487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.674524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.674537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.678259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.678296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.678310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.682631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.682690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.682704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.687508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.687546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.687559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.690989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.691026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.691039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.695198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.695235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.695249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.700310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.700353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.700367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.705336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.705376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.705389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.708446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.708484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.708497] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.713236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.713276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.713290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.718562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.718601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.718615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.722669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.722705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.722718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.726091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.726132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.726146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.730402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.730445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.730459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.734889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.734931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.734945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.739614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.739665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 
[2024-12-09 06:05:42.739681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.744236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.744275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.744288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.748925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.748963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.748976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.753168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.753207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.753220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.758044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.758085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.758100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.763157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.763201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.763214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.767638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.767686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.767700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.771298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.771336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19328 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.771349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.775214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.775252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.775265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.779047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.779086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.779100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.783714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.291 [2024-12-09 06:05:42.783759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.291 [2024-12-09 06:05:42.783772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.291 [2024-12-09 06:05:42.787488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.292 [2024-12-09 06:05:42.787528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.292 [2024-12-09 06:05:42.787541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.292 [2024-12-09 06:05:42.791594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.292 [2024-12-09 06:05:42.791639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.292 [2024-12-09 06:05:42.791669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.292 [2024-12-09 06:05:42.795862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.292 [2024-12-09 06:05:42.795901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.292 [2024-12-09 06:05:42.795915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.292 [2024-12-09 06:05:42.799586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.292 [2024-12-09 06:05:42.799625] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.292 [2024-12-09 06:05:42.799639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.292 [2024-12-09 06:05:42.803775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.292 [2024-12-09 06:05:42.803816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.292 [2024-12-09 06:05:42.803830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.292 [2024-12-09 06:05:42.807761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.292 [2024-12-09 06:05:42.807803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.292 [2024-12-09 06:05:42.807816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.292 [2024-12-09 06:05:42.812118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.292 [2024-12-09 06:05:42.812160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.292 [2024-12-09 06:05:42.812174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.292 [2024-12-09 06:05:42.815707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.292 [2024-12-09 06:05:42.815747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.292 [2024-12-09 06:05:42.815760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.292 [2024-12-09 06:05:42.820055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.292 [2024-12-09 06:05:42.820097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.292 [2024-12-09 06:05:42.820111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.292 [2024-12-09 06:05:42.824091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.292 [2024-12-09 06:05:42.824133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.292 [2024-12-09 06:05:42.824146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.292 [2024-12-09 06:05:42.828609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.292 [2024-12-09 06:05:42.828665] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.292 [2024-12-09 06:05:42.828681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.292 [2024-12-09 06:05:42.831914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.292 [2024-12-09 06:05:42.831954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.292 [2024-12-09 06:05:42.831967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.292 [2024-12-09 06:05:42.836452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.292 [2024-12-09 06:05:42.836498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.292 [2024-12-09 06:05:42.836512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.292 [2024-12-09 06:05:42.839751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.292 [2024-12-09 06:05:42.839794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.292 [2024-12-09 06:05:42.839808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.292 [2024-12-09 06:05:42.844052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.292 [2024-12-09 06:05:42.844098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.292 [2024-12-09 06:05:42.844112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.292 [2024-12-09 06:05:42.849230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.292 [2024-12-09 06:05:42.849279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.292 [2024-12-09 06:05:42.849293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.292 [2024-12-09 06:05:42.854329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.292 [2024-12-09 06:05:42.854379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.292 [2024-12-09 06:05:42.854393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.292 [2024-12-09 06:05:42.857151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 
01:06:48.292 [2024-12-09 06:05:42.857189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.292 [2024-12-09 06:05:42.857203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.292 [2024-12-09 06:05:42.861380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.292 [2024-12-09 06:05:42.861427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.292 [2024-12-09 06:05:42.861440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.292 [2024-12-09 06:05:42.864951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.292 [2024-12-09 06:05:42.864999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.292 [2024-12-09 06:05:42.865014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.292 [2024-12-09 06:05:42.868690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.292 [2024-12-09 06:05:42.868739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.292 [2024-12-09 06:05:42.868752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.292 [2024-12-09 06:05:42.873062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.292 [2024-12-09 06:05:42.873111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.292 [2024-12-09 06:05:42.873125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.551 [2024-12-09 06:05:42.877020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.551 [2024-12-09 06:05:42.877069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.551 [2024-12-09 06:05:42.877083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.551 [2024-12-09 06:05:42.881773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.551 [2024-12-09 06:05:42.881815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.551 [2024-12-09 06:05:42.881829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.551 [2024-12-09 06:05:42.885838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.551 [2024-12-09 06:05:42.885878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.551 [2024-12-09 06:05:42.885892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.551 [2024-12-09 06:05:42.889492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.551 [2024-12-09 06:05:42.889537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.551 [2024-12-09 06:05:42.889551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.551 [2024-12-09 06:05:42.893636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.551 [2024-12-09 06:05:42.893691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.551 [2024-12-09 06:05:42.893705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.551 [2024-12-09 06:05:42.898882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.551 [2024-12-09 06:05:42.898933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.551 [2024-12-09 06:05:42.898947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.551 [2024-12-09 06:05:42.903961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.551 [2024-12-09 06:05:42.904008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.551 [2024-12-09 06:05:42.904023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.551 [2024-12-09 06:05:42.907126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.551 [2024-12-09 06:05:42.907161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.551 [2024-12-09 06:05:42.907174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.551 [2024-12-09 06:05:42.911641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.551 [2024-12-09 06:05:42.911695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.551 [2024-12-09 06:05:42.911710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.551 [2024-12-09 06:05:42.916784] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.551 [2024-12-09 06:05:42.916836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.551 [2024-12-09 06:05:42.916850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.551 [2024-12-09 06:05:42.921598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.551 [2024-12-09 06:05:42.921663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.551 [2024-12-09 06:05:42.921680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.551 [2024-12-09 06:05:42.924795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.551 [2024-12-09 06:05:42.924836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.551 [2024-12-09 06:05:42.924850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:48.551 [2024-12-09 06:05:42.930204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.551 [2024-12-09 06:05:42.930252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.551 [2024-12-09 06:05:42.930266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:48.551 [2024-12-09 06:05:42.935379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.551 [2024-12-09 06:05:42.935426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.552 [2024-12-09 06:05:42.935440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:48.552 [2024-12-09 06:05:42.940343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.552 [2024-12-09 06:05:42.940385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.552 [2024-12-09 06:05:42.940400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:06:48.552 [2024-12-09 06:05:42.944510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb26580) 01:06:48.552 [2024-12-09 06:05:42.944553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:48.552 [2024-12-09 06:05:42.944566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 
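The randread digest-error pass ends here; the summary that follows reports roughly 7037 IOPS, and the harness then checks that the injected data-digest failures were actually counted. Every digest failure above is completed back to the host as COMMAND TRANSIENT TRANSPORT ERROR (00/22), and host/digest.sh reads that counter from bdev_get_iostat over the bperf RPC socket and requires it to be non-zero (454 in this run). A minimal sketch of that check, not part of the captured output and assuming the same /var/tmp/bperf.sock socket and nvme0n1 bdev used in this run:

  # sketch: mirrors get_transient_errcount from host/digest.sh
  sock=/var/tmp/bperf.sock
  errs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # a digest-corrupting run must record at least one transient transport error
  (( errs > 0 ))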
01:06:48.552 7037.50 IOPS, 879.69 MiB/s
01:06:48.552 Latency(us)
01:06:48.552 [2024-12-09T06:05:43.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:06:48.552 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
01:06:48.552 nvme0n1 : 2.00 7034.25 879.28 0.00 0.00 2270.64 618.12 6911.07
01:06:48.552 [2024-12-09T06:05:43.138Z] ===================================================================================================================
01:06:48.552 [2024-12-09T06:05:43.138Z] Total : 7034.25 879.28 0.00 0.00 2270.64 618.12 6911.07
01:06:48.552 {
01:06:48.552 "results": [
01:06:48.552 {
01:06:48.552 "job": "nvme0n1",
01:06:48.552 "core_mask": "0x2",
01:06:48.552 "workload": "randread",
01:06:48.552 "status": "finished",
01:06:48.552 "queue_depth": 16,
01:06:48.552 "io_size": 131072,
01:06:48.552 "runtime": 2.003199,
01:06:48.552 "iops": 7034.248719173682,
01:06:48.552 "mibps": 879.2810898967102,
01:06:48.552 "io_failed": 0,
01:06:48.552 "io_timeout": 0,
01:06:48.552 "avg_latency_us": 2270.6410038644913,
01:06:48.552 "min_latency_us": 618.1236363636364,
01:06:48.552 "max_latency_us": 6911.069090909091
01:06:48.552 }
01:06:48.552 ],
01:06:48.552 "core_count": 1
01:06:48.552 }
01:06:48.552 06:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
01:06:48.552 06:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
01:06:48.552 | .driver_specific
01:06:48.552 | .nvme_error
01:06:48.552 | .status_code
01:06:48.552 | .command_transient_transport_error'
01:06:48.552 06:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
01:06:48.552 06:05:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
01:06:48.811 06:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 454 > 0 ))
01:06:48.811 06:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93657
01:06:48.811 06:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 93657 ']'
01:06:48.811 06:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 93657
01:06:48.811 06:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
01:06:48.811 06:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:06:48.811 06:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93657
01:06:48.811 killing process with pid 93657 Received shutdown signal, test time was about 2.000000 seconds
01:06:48.811
01:06:48.811 Latency(us)
01:06:48.811 [2024-12-09T06:05:43.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:06:48.811 [2024-12-09T06:05:43.397Z] ===================================================================================================================
01:06:48.811 [2024-12-09T06:05:43.397Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:06:48.811 06:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
01:06:48.811 06:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:06:48.811 06:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93657' 01:06:48.811 06:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 93657 01:06:48.811 06:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 93657 01:06:49.069 06:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 01:06:49.069 06:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 01:06:49.069 06:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 01:06:49.069 06:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 01:06:49.069 06:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 01:06:49.069 06:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93734 01:06:49.069 06:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93734 /var/tmp/bperf.sock 01:06:49.069 06:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 01:06:49.069 06:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 93734 ']' 01:06:49.069 06:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:06:49.069 06:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 01:06:49.069 06:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:06:49.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:06:49.069 06:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 01:06:49.069 06:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:06:49.069 [2024-12-09 06:05:43.546078] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:06:49.069 [2024-12-09 06:05:43.546183] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93734 ] 01:06:49.327 [2024-12-09 06:05:43.692330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:49.327 [2024-12-09 06:05:43.725422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:06:49.327 06:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:06:49.327 06:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 01:06:49.327 06:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:06:49.327 06:05:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:06:49.584 06:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 01:06:49.584 06:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:49.584 06:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:06:49.584 06:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:49.584 06:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:06:49.584 06:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:06:50.149 nvme0n1 01:06:50.149 06:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 01:06:50.149 06:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:50.149 06:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:06:50.149 06:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:50.149 06:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 01:06:50.149 06:05:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:06:50.149 Running I/O for 2 seconds... 
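The second error case (randwrite, 4 KiB I/O, queue depth 128) is now fully set up and the timed run starts; the Data digest error lines that follow come from data_crc32_calc_done in tcp.c. The xtrace above spells out the whole setup; a condensed sketch of the same sequence is given here, not part of the captured output, with paths and arguments copied from the trace (note that in the script the accel_error_inject_error calls go through rpc_cmd, not through the bperf socket):

  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/bperf.sock
  # start bdevperf idle (-z) on its own RPC socket: randwrite, 4 KiB I/O, qd 128, 2 s
  "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
  # (the harness waits for the socket with waitforlisten before issuing any RPCs)
  # enable per-command NVMe error counters and set the bdev retry count to -1
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # crc32c error injection starts out disabled (default RPC socket, as rpc_cmd uses in the script)
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
  # attach the TCP controller with data digest enabled (--ddgst); the bdev appears as nvme0n1
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # arm crc32c corruption (-t corrupt -i 256, as traced) and kick off the timed run
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests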
01:06:50.149 [2024-12-09 06:05:44.715826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eeee38 01:06:50.149 [2024-12-09 06:05:44.717267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.150 [2024-12-09 06:05:44.717314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:06:50.150 [2024-12-09 06:05:44.727533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016edf118 01:06:50.150 [2024-12-09 06:05:44.728670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.150 [2024-12-09 06:05:44.728709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:06:50.413 [2024-12-09 06:05:44.739705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee7818 01:06:50.413 [2024-12-09 06:05:44.740810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.413 [2024-12-09 06:05:44.740854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:06:50.413 [2024-12-09 06:05:44.755237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016efda78 01:06:50.413 [2024-12-09 06:05:44.757071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.413 [2024-12-09 06:05:44.757113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:06:50.413 [2024-12-09 06:05:44.767507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef0ff8 01:06:50.413 [2024-12-09 06:05:44.769278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.413 [2024-12-09 06:05:44.769317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:06:50.413 [2024-12-09 06:05:44.778980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee0630 01:06:50.413 [2024-12-09 06:05:44.780813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.413 [2024-12-09 06:05:44.780854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:06:50.413 [2024-12-09 06:05:44.790032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef9f68 01:06:50.413 [2024-12-09 06:05:44.791356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.413 [2024-12-09 06:05:44.791394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 
sqhd:0059 p:0 m:0 dnr:0 01:06:50.413 [2024-12-09 06:05:44.801898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eff3c8 01:06:50.413 [2024-12-09 06:05:44.803201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.413 [2024-12-09 06:05:44.803238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:06:50.413 [2024-12-09 06:05:44.817884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eee5c8 01:06:50.413 [2024-12-09 06:05:44.819885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.413 [2024-12-09 06:05:44.819934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:06:50.413 [2024-12-09 06:05:44.827041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef31b8 01:06:50.413 [2024-12-09 06:05:44.828053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.413 [2024-12-09 06:05:44.828100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:06:50.413 [2024-12-09 06:05:44.842718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016efb048 01:06:50.413 [2024-12-09 06:05:44.844224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.413 [2024-12-09 06:05:44.844266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:06:50.413 [2024-12-09 06:05:44.854107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016efc998 01:06:50.413 [2024-12-09 06:05:44.855437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.413 [2024-12-09 06:05:44.855474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:06:50.413 [2024-12-09 06:05:44.865514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee9e10 01:06:50.413 [2024-12-09 06:05:44.866709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.413 [2024-12-09 06:05:44.866745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:06:50.413 [2024-12-09 06:05:44.876997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ede038 01:06:50.413 [2024-12-09 06:05:44.878041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.413 [2024-12-09 06:05:44.878077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:71 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:06:50.413 [2024-12-09 06:05:44.888520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee2c28 01:06:50.413 [2024-12-09 06:05:44.889376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.413 [2024-12-09 06:05:44.889408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:06:50.413 [2024-12-09 06:05:44.903592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eeff18 01:06:50.413 [2024-12-09 06:05:44.905454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.413 [2024-12-09 06:05:44.905490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:06:50.414 [2024-12-09 06:05:44.912396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eed0b0 01:06:50.414 [2024-12-09 06:05:44.913418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.414 [2024-12-09 06:05:44.913450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:50.414 [2024-12-09 06:05:44.926858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016efeb58 01:06:50.414 [2024-12-09 06:05:44.928572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.414 [2024-12-09 06:05:44.928609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:06:50.414 [2024-12-09 06:05:44.939085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eddc00 01:06:50.414 [2024-12-09 06:05:44.940927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.414 [2024-12-09 06:05:44.940965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:06:50.414 [2024-12-09 06:05:44.951180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef7970 01:06:50.414 [2024-12-09 06:05:44.952756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.414 [2024-12-09 06:05:44.952790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:06:50.414 [2024-12-09 06:05:44.962822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee3060 01:06:50.414 [2024-12-09 06:05:44.964047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.414 [2024-12-09 06:05:44.964084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:06:50.414 [2024-12-09 06:05:44.974726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee6b70 01:06:50.414 [2024-12-09 06:05:44.976101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.414 [2024-12-09 06:05:44.976137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:06:50.414 [2024-12-09 06:05:44.986787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee49b0 01:06:50.414 [2024-12-09 06:05:44.987682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.414 [2024-12-09 06:05:44.987709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:06:50.673 [2024-12-09 06:05:44.998451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef0350 01:06:50.673 [2024-12-09 06:05:44.999249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.673 [2024-12-09 06:05:44.999284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:06:50.673 [2024-12-09 06:05:45.009923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef8a50 01:06:50.673 [2024-12-09 06:05:45.010489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.673 [2024-12-09 06:05:45.010522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:06:50.673 [2024-12-09 06:05:45.023613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eeb760 01:06:50.673 [2024-12-09 06:05:45.025032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.673 [2024-12-09 06:05:45.025069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:06:50.673 [2024-12-09 06:05:45.036169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef2510 01:06:50.673 [2024-12-09 06:05:45.037917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.673 [2024-12-09 06:05:45.037955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:06:50.673 [2024-12-09 06:05:45.044809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee7818 01:06:50.673 [2024-12-09 06:05:45.045557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.673 [2024-12-09 06:05:45.045596] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:06:50.673 [2024-12-09 06:05:45.059368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eeaab8 01:06:50.673 [2024-12-09 06:05:45.060825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.673 [2024-12-09 06:05:45.060864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:06:50.673 [2024-12-09 06:05:45.070604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016efb048 01:06:50.673 [2024-12-09 06:05:45.071813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.673 [2024-12-09 06:05:45.071854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:06:50.673 [2024-12-09 06:05:45.082386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eee5c8 01:06:50.673 [2024-12-09 06:05:45.083374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.673 [2024-12-09 06:05:45.083412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:06:50.673 [2024-12-09 06:05:45.093757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef1ca0 01:06:50.673 [2024-12-09 06:05:45.094538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.673 [2024-12-09 06:05:45.094574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:06:50.673 [2024-12-09 06:05:45.108895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef9b30 01:06:50.673 [2024-12-09 06:05:45.110737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.673 [2024-12-09 06:05:45.110779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:06:50.673 [2024-12-09 06:05:45.120449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eef6a8 01:06:50.673 [2024-12-09 06:05:45.122117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.673 [2024-12-09 06:05:45.122156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:06:50.673 [2024-12-09 06:05:45.130967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eeee38 01:06:50.673 [2024-12-09 06:05:45.131984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.673 [2024-12-09 06:05:45.132021] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:06:50.673 [2024-12-09 06:05:45.142413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee1b48 01:06:50.673 [2024-12-09 06:05:45.143261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.673 [2024-12-09 06:05:45.143302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:06:50.673 [2024-12-09 06:05:45.154219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016efb048 01:06:50.673 [2024-12-09 06:05:45.155046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.673 [2024-12-09 06:05:45.155085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:06:50.673 [2024-12-09 06:05:45.168630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee38d0 01:06:50.673 [2024-12-09 06:05:45.170132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.673 [2024-12-09 06:05:45.170166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:06:50.673 [2024-12-09 06:05:45.179807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee4de8 01:06:50.673 [2024-12-09 06:05:45.181051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.673 [2024-12-09 06:05:45.181084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:06:50.673 [2024-12-09 06:05:45.191463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eeff18 01:06:50.673 [2024-12-09 06:05:45.192638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.673 [2024-12-09 06:05:45.192686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:06:50.673 [2024-12-09 06:05:45.205804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef2d80 01:06:50.673 [2024-12-09 06:05:45.207671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.673 [2024-12-09 06:05:45.207707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:06:50.673 [2024-12-09 06:05:45.214396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef2d80 01:06:50.673 [2024-12-09 06:05:45.215300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.673 [2024-12-09 
06:05:45.215335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:06:50.673 [2024-12-09 06:05:45.229184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eebb98 01:06:50.673 [2024-12-09 06:05:45.230590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.673 [2024-12-09 06:05:45.230628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:06:50.673 [2024-12-09 06:05:45.240585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef4f40 01:06:50.673 [2024-12-09 06:05:45.241820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.673 [2024-12-09 06:05:45.241858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:06:50.673 [2024-12-09 06:05:45.251999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ede8a8 01:06:50.673 [2024-12-09 06:05:45.253087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.673 [2024-12-09 06:05:45.253126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:06:50.932 [2024-12-09 06:05:45.263689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef20d8 01:06:50.932 [2024-12-09 06:05:45.264589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.932 [2024-12-09 06:05:45.264627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:06:50.932 [2024-12-09 06:05:45.275118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef6458 01:06:50.932 [2024-12-09 06:05:45.275895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.932 [2024-12-09 06:05:45.275934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:06:50.932 [2024-12-09 06:05:45.290222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016efef90 01:06:50.932 [2024-12-09 06:05:45.292004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.932 [2024-12-09 06:05:45.292040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:06:50.932 [2024-12-09 06:05:45.299014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016efbcf0 01:06:50.932 [2024-12-09 06:05:45.299939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:06:50.932 [2024-12-09 06:05:45.299975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:06:50.932 [2024-12-09 06:05:45.313510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eddc00 01:06:50.932 [2024-12-09 06:05:45.315134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.932 [2024-12-09 06:05:45.315173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:06:50.932 [2024-12-09 06:05:45.324759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016edf118 01:06:50.932 [2024-12-09 06:05:45.326149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.932 [2024-12-09 06:05:45.326190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:06:50.932 [2024-12-09 06:05:45.336567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee7818 01:06:50.932 [2024-12-09 06:05:45.337882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.932 [2024-12-09 06:05:45.337917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:06:50.932 [2024-12-09 06:05:45.351046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eedd58 01:06:50.932 [2024-12-09 06:05:45.353041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.932 [2024-12-09 06:05:45.353084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:06:50.932 [2024-12-09 06:05:45.359859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee23b8 01:06:50.932 [2024-12-09 06:05:45.360893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.932 [2024-12-09 06:05:45.360934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:06:50.932 [2024-12-09 06:05:45.374360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef7da8 01:06:50.932 [2024-12-09 06:05:45.376091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.932 [2024-12-09 06:05:45.376128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:06:50.932 [2024-12-09 06:05:45.385774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eea248 01:06:50.932 [2024-12-09 06:05:45.387241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9988 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 01:06:50.932 [2024-12-09 06:05:45.387285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:06:50.932 [2024-12-09 06:05:45.397632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef96f8 01:06:50.932 [2024-12-09 06:05:45.399064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.932 [2024-12-09 06:05:45.399106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:06:50.932 [2024-12-09 06:05:45.409970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef31b8 01:06:50.932 [2024-12-09 06:05:45.411389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.932 [2024-12-09 06:05:45.411431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:06:50.932 [2024-12-09 06:05:45.421512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef6890 01:06:50.932 [2024-12-09 06:05:45.422795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.932 [2024-12-09 06:05:45.422835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:06:50.932 [2024-12-09 06:05:45.432996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee1b48 01:06:50.933 [2024-12-09 06:05:45.434137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.933 [2024-12-09 06:05:45.434174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:06:50.933 [2024-12-09 06:05:45.444716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef4b08 01:06:50.933 [2024-12-09 06:05:45.445783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.933 [2024-12-09 06:05:45.445818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:06:50.933 [2024-12-09 06:05:45.459099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef96f8 01:06:50.933 [2024-12-09 06:05:45.460877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.933 [2024-12-09 06:05:45.460916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:06:50.933 [2024-12-09 06:05:45.467691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eea680 01:06:50.933 [2024-12-09 06:05:45.468446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 
lba:19889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.933 [2024-12-09 06:05:45.468478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:06:50.933 [2024-12-09 06:05:45.481480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef46d0 01:06:50.933 [2024-12-09 06:05:45.482682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.933 [2024-12-09 06:05:45.482715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:06:50.933 [2024-12-09 06:05:45.493463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eea248 01:06:50.933 [2024-12-09 06:05:45.494594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.933 [2024-12-09 06:05:45.494632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:06:50.933 [2024-12-09 06:05:45.505203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee0630 01:06:50.933 [2024-12-09 06:05:45.506357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:50.933 [2024-12-09 06:05:45.506392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:06:51.192 [2024-12-09 06:05:45.519876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eeaab8 01:06:51.192 [2024-12-09 06:05:45.521790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.192 [2024-12-09 06:05:45.521829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:06:51.192 [2024-12-09 06:05:45.528600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016edf118 01:06:51.192 [2024-12-09 06:05:45.529474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.192 [2024-12-09 06:05:45.529508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:06:51.192 [2024-12-09 06:05:45.543130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef0350 01:06:51.192 [2024-12-09 06:05:45.544683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.192 [2024-12-09 06:05:45.544720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:06:51.192 [2024-12-09 06:05:45.554391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016edf988 01:06:51.192 [2024-12-09 06:05:45.555716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:47 nsid:1 lba:14294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.192 [2024-12-09 06:05:45.555752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:06:51.192 [2024-12-09 06:05:45.566218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eebfd0 01:06:51.192 [2024-12-09 06:05:45.567470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.192 [2024-12-09 06:05:45.567508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:06:51.192 [2024-12-09 06:05:45.580776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef2d80 01:06:51.192 [2024-12-09 06:05:45.582715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.192 [2024-12-09 06:05:45.582755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:06:51.192 [2024-12-09 06:05:45.589396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eec408 01:06:51.192 [2024-12-09 06:05:45.590335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.192 [2024-12-09 06:05:45.590369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:06:51.192 [2024-12-09 06:05:45.603920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016efeb58 01:06:51.192 [2024-12-09 06:05:45.605541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.192 [2024-12-09 06:05:45.605580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:06:51.192 [2024-12-09 06:05:45.615165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee4de8 01:06:51.192 [2024-12-09 06:05:45.616506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.192 [2024-12-09 06:05:45.616543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:06:51.192 [2024-12-09 06:05:45.626844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee84c0 01:06:51.192 [2024-12-09 06:05:45.628014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.192 [2024-12-09 06:05:45.628050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:06:51.192 [2024-12-09 06:05:45.641096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef81e0 01:06:51.192 [2024-12-09 06:05:45.643071] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.192 [2024-12-09 06:05:45.643108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:06:51.192 [2024-12-09 06:05:45.649632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eeaef0 01:06:51.192 [2024-12-09 06:05:45.650605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.192 [2024-12-09 06:05:45.650637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:06:51.192 [2024-12-09 06:05:45.664026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016efc998 01:06:51.192 [2024-12-09 06:05:45.665695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.192 [2024-12-09 06:05:45.665740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:51.192 [2024-12-09 06:05:45.675201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee1b48 01:06:51.192 [2024-12-09 06:05:45.676604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.192 [2024-12-09 06:05:45.676639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:51.192 [2024-12-09 06:05:45.686907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eedd58 01:06:51.192 [2024-12-09 06:05:45.688265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.192 [2024-12-09 06:05:45.688297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:06:51.192 [2024-12-09 06:05:45.698033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef92c0 01:06:51.192 [2024-12-09 06:05:45.699158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.192 [2024-12-09 06:05:45.699191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:06:51.192 20883.00 IOPS, 81.57 MiB/s [2024-12-09T06:05:45.779Z] [2024-12-09 06:05:45.713508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef1430 01:06:51.193 [2024-12-09 06:05:45.715007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.193 [2024-12-09 06:05:45.715043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:06:51.193 [2024-12-09 06:05:45.725407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) 
with pdu=0x200016eed4e8 01:06:51.193 [2024-12-09 06:05:45.726835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.193 [2024-12-09 06:05:45.726883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:06:51.193 [2024-12-09 06:05:45.737772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee95a0 01:06:51.193 [2024-12-09 06:05:45.738714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.193 [2024-12-09 06:05:45.738747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:06:51.193 [2024-12-09 06:05:45.749610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016efc998 01:06:51.193 [2024-12-09 06:05:45.750520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.193 [2024-12-09 06:05:45.750556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:06:51.193 [2024-12-09 06:05:45.761582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee12d8 01:06:51.193 [2024-12-09 06:05:45.762498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.193 [2024-12-09 06:05:45.762534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:06:51.193 [2024-12-09 06:05:45.773707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee95a0 01:06:51.193 [2024-12-09 06:05:45.774606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.193 [2024-12-09 06:05:45.774642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:06:51.450 [2024-12-09 06:05:45.787409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016efc998 01:06:51.450 [2024-12-09 06:05:45.789155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.450 [2024-12-09 06:05:45.789189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:06:51.450 [2024-12-09 06:05:45.798814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee3d08 01:06:51.450 [2024-12-09 06:05:45.800393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.450 [2024-12-09 06:05:45.800425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:06:51.450 [2024-12-09 06:05:45.807537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2191570) with pdu=0x200016ef96f8 01:06:51.450 [2024-12-09 06:05:45.808280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.450 [2024-12-09 06:05:45.808312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:06:51.450 [2024-12-09 06:05:45.821843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016efcdd0 01:06:51.450 [2024-12-09 06:05:45.823267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.450 [2024-12-09 06:05:45.823301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:06:51.450 [2024-12-09 06:05:45.832967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eee190 01:06:51.451 [2024-12-09 06:05:45.834122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.451 [2024-12-09 06:05:45.834156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:06:51.451 [2024-12-09 06:05:45.844671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eebfd0 01:06:51.451 [2024-12-09 06:05:45.845786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.451 [2024-12-09 06:05:45.845822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:06:51.451 [2024-12-09 06:05:45.858995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee6b70 01:06:51.451 [2024-12-09 06:05:45.860802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.451 [2024-12-09 06:05:45.860836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:06:51.451 [2024-12-09 06:05:45.867507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee7818 01:06:51.451 [2024-12-09 06:05:45.868328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.451 [2024-12-09 06:05:45.868365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:06:51.451 [2024-12-09 06:05:45.881854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee0a68 01:06:51.451 [2024-12-09 06:05:45.883359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.451 [2024-12-09 06:05:45.883394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:06:51.451 [2024-12-09 06:05:45.893576] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x2191570) with pdu=0x200016ee8088 01:06:51.451 [2024-12-09 06:05:45.894774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.451 [2024-12-09 06:05:45.894807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:06:51.451 [2024-12-09 06:05:45.905672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eea248 01:06:51.451 [2024-12-09 06:05:45.907185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.451 [2024-12-09 06:05:45.907218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:06:51.451 [2024-12-09 06:05:45.916843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee2c28 01:06:51.451 [2024-12-09 06:05:45.918106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.451 [2024-12-09 06:05:45.918142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:06:51.451 [2024-12-09 06:05:45.928529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eea248 01:06:51.451 [2024-12-09 06:05:45.929789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.451 [2024-12-09 06:05:45.929829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:06:51.451 [2024-12-09 06:05:45.942931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef0350 01:06:51.451 [2024-12-09 06:05:45.944850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.451 [2024-12-09 06:05:45.944892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:06:51.451 [2024-12-09 06:05:45.951424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eef270 01:06:51.451 [2024-12-09 06:05:45.952201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.451 [2024-12-09 06:05:45.952379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:06:51.451 [2024-12-09 06:05:45.966428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef31b8 01:06:51.451 [2024-12-09 06:05:45.968399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.451 [2024-12-09 06:05:45.968435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:06:51.451 [2024-12-09 06:05:45.977977] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016efa7d8 01:06:51.451 [2024-12-09 06:05:45.979731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.451 [2024-12-09 06:05:45.979770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:06:51.451 [2024-12-09 06:05:45.989536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eeea00 01:06:51.451 [2024-12-09 06:05:45.991225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.451 [2024-12-09 06:05:45.991402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:06:51.451 [2024-12-09 06:05:46.001512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef0350 01:06:51.451 [2024-12-09 06:05:46.002822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.451 [2024-12-09 06:05:46.002856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:06:51.451 [2024-12-09 06:05:46.013434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef3e60 01:06:51.451 [2024-12-09 06:05:46.014948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.451 [2024-12-09 06:05:46.014985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:06:51.451 [2024-12-09 06:05:46.028138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016efc560 01:06:51.451 [2024-12-09 06:05:46.030119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.451 [2024-12-09 06:05:46.030291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:06:51.709 [2024-12-09 06:05:46.037119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eed920 01:06:51.709 [2024-12-09 06:05:46.038317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.709 [2024-12-09 06:05:46.038353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:06:51.709 [2024-12-09 06:05:46.051808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee27f0 01:06:51.709 [2024-12-09 06:05:46.053714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.709 [2024-12-09 06:05:46.053754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:06:51.709 
[2024-12-09 06:05:46.063323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016efb480 01:06:51.709 [2024-12-09 06:05:46.064783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.709 [2024-12-09 06:05:46.064820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:06:51.709 [2024-12-09 06:05:46.075340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee7818 01:06:51.709 [2024-12-09 06:05:46.076736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.709 [2024-12-09 06:05:46.076912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:06:51.709 [2024-12-09 06:05:46.086953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef81e0 01:06:51.710 [2024-12-09 06:05:46.088062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.710 [2024-12-09 06:05:46.088231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:06:51.710 [2024-12-09 06:05:46.098722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016efeb58 01:06:51.710 [2024-12-09 06:05:46.099810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.710 [2024-12-09 06:05:46.099854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:06:51.710 [2024-12-09 06:05:46.111196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef6458 01:06:51.710 [2024-12-09 06:05:46.112369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.710 [2024-12-09 06:05:46.112414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:06:51.710 [2024-12-09 06:05:46.123015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef7970 01:06:51.710 [2024-12-09 06:05:46.124107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.710 [2024-12-09 06:05:46.124150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:06:51.710 [2024-12-09 06:05:46.137520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016efcdd0 01:06:51.710 [2024-12-09 06:05:46.139300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.710 [2024-12-09 06:05:46.139473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004a p:0 m:0 
dnr:0 01:06:51.710 [2024-12-09 06:05:46.146382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016efb8b8 01:06:51.710 [2024-12-09 06:05:46.147179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.710 [2024-12-09 06:05:46.147220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:06:51.710 [2024-12-09 06:05:46.160963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eeb328 01:06:51.710 [2024-12-09 06:05:46.162660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.710 [2024-12-09 06:05:46.162716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:06:51.710 [2024-12-09 06:05:46.172496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eebfd0 01:06:51.710 [2024-12-09 06:05:46.173717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.710 [2024-12-09 06:05:46.173753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:06:51.710 [2024-12-09 06:05:46.184500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ede470 01:06:51.710 [2024-12-09 06:05:46.185892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.710 [2024-12-09 06:05:46.185927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:06:51.710 [2024-12-09 06:05:46.199306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee3060 01:06:51.710 [2024-12-09 06:05:46.201160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.710 [2024-12-09 06:05:46.201205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:06:51.710 [2024-12-09 06:05:46.207979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef8e88 01:06:51.710 [2024-12-09 06:05:46.209018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.710 [2024-12-09 06:05:46.209057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:06:51.710 [2024-12-09 06:05:46.222598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee5658 01:06:51.710 [2024-12-09 06:05:46.224156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.710 [2024-12-09 06:05:46.224196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:85 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:06:51.710 [2024-12-09 06:05:46.233822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef2d80 01:06:51.710 [2024-12-09 06:05:46.235099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.710 [2024-12-09 06:05:46.235138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:06:51.710 [2024-12-09 06:05:46.245513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef35f0 01:06:51.710 [2024-12-09 06:05:46.246774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.710 [2024-12-09 06:05:46.246818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:06:51.710 [2024-12-09 06:05:46.260196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eec408 01:06:51.710 [2024-12-09 06:05:46.262136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.710 [2024-12-09 06:05:46.262178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:06:51.710 [2024-12-09 06:05:46.268805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016efe2e8 01:06:51.710 [2024-12-09 06:05:46.269738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.710 [2024-12-09 06:05:46.269775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:06:51.710 [2024-12-09 06:05:46.283237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee6fa8 01:06:51.710 [2024-12-09 06:05:46.285036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.710 [2024-12-09 06:05:46.285073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:06:51.968 [2024-12-09 06:05:46.294851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee4140 01:06:51.968 [2024-12-09 06:05:46.296117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.968 [2024-12-09 06:05:46.296156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:06:51.968 [2024-12-09 06:05:46.306625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee12d8 01:06:51.968 [2024-12-09 06:05:46.307972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.968 [2024-12-09 06:05:46.308011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:06:51.968 [2024-12-09 06:05:46.321056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee01f8 01:06:51.968 [2024-12-09 06:05:46.323074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.968 [2024-12-09 06:05:46.323113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:51.969 [2024-12-09 06:05:46.329668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee0630 01:06:51.969 [2024-12-09 06:05:46.330699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.969 [2024-12-09 06:05:46.330737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:06:51.969 [2024-12-09 06:05:46.344040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef6458 01:06:51.969 [2024-12-09 06:05:46.345745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.969 [2024-12-09 06:05:46.345787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:06:51.969 [2024-12-09 06:05:46.355222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef7970 01:06:51.969 [2024-12-09 06:05:46.356641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.969 [2024-12-09 06:05:46.356695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:06:51.969 [2024-12-09 06:05:46.366960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eea248 01:06:51.969 [2024-12-09 06:05:46.368365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.969 [2024-12-09 06:05:46.368535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:06:51.969 [2024-12-09 06:05:46.378378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016efb8b8 01:06:51.969 [2024-12-09 06:05:46.379528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.969 [2024-12-09 06:05:46.379709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:06:51.969 [2024-12-09 06:05:46.390064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eee190 01:06:51.969 [2024-12-09 06:05:46.391175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.969 [2024-12-09 06:05:46.391218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:06:51.969 [2024-12-09 06:05:46.404600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eebfd0 01:06:51.969 [2024-12-09 06:05:46.406409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.969 [2024-12-09 06:05:46.406588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:06:51.969 [2024-12-09 06:05:46.413546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee4140 01:06:51.969 [2024-12-09 06:05:46.414574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.969 [2024-12-09 06:05:46.414608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:06:51.969 [2024-12-09 06:05:46.426030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee95a0 01:06:51.969 [2024-12-09 06:05:46.426866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.969 [2024-12-09 06:05:46.426909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:06:51.969 [2024-12-09 06:05:46.440289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee38d0 01:06:51.969 [2024-12-09 06:05:46.441291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.969 [2024-12-09 06:05:46.441482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:06:51.969 [2024-12-09 06:05:46.453744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef6890 01:06:51.969 [2024-12-09 06:05:46.455797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.969 [2024-12-09 06:05:46.456005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:06:51.969 [2024-12-09 06:05:46.462994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016efb480 01:06:51.969 [2024-12-09 06:05:46.464317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.969 [2024-12-09 06:05:46.464517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:06:51.969 [2024-12-09 06:05:46.475339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee7c50 01:06:51.969 [2024-12-09 06:05:46.476451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.969 [2024-12-09 06:05:46.476658] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:06:51.969 [2024-12-09 06:05:46.490642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016efe2e8 01:06:51.969 [2024-12-09 06:05:46.492634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.969 [2024-12-09 06:05:46.492845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:06:51.969 [2024-12-09 06:05:46.502626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016efcdd0 01:06:51.969 [2024-12-09 06:05:46.504091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.969 [2024-12-09 06:05:46.504285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:06:51.969 [2024-12-09 06:05:46.514907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef6458 01:06:51.969 [2024-12-09 06:05:46.516492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.969 [2024-12-09 06:05:46.516705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:06:51.969 [2024-12-09 06:05:46.526780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee4140 01:06:51.969 [2024-12-09 06:05:46.527930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.969 [2024-12-09 06:05:46.528104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:06:51.969 [2024-12-09 06:05:46.538880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016efb480 01:06:51.969 [2024-12-09 06:05:46.539812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:51.969 [2024-12-09 06:05:46.539855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:06:52.228 [2024-12-09 06:05:46.554263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef8a50 01:06:52.228 [2024-12-09 06:05:46.556252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:52.228 [2024-12-09 06:05:46.556298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:06:52.228 [2024-12-09 06:05:46.565844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee6300 01:06:52.228 [2024-12-09 06:05:46.567578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:52.228 [2024-12-09 
06:05:46.567759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:06:52.228 [2024-12-09 06:05:46.574733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016efa3a0 01:06:52.228 [2024-12-09 06:05:46.575664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:52.228 [2024-12-09 06:05:46.575704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:06:52.228 [2024-12-09 06:05:46.589261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016efeb58 01:06:52.228 [2024-12-09 06:05:46.591128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:52.228 [2024-12-09 06:05:46.591298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:06:52.228 [2024-12-09 06:05:46.601027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016efc998 01:06:52.228 [2024-12-09 06:05:46.602407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:52.228 [2024-12-09 06:05:46.602569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:06:52.228 [2024-12-09 06:05:46.612861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef6458 01:06:52.228 [2024-12-09 06:05:46.614169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:9732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:52.228 [2024-12-09 06:05:46.614212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:06:52.228 [2024-12-09 06:05:46.627408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eef270 01:06:52.228 [2024-12-09 06:05:46.629408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:52.228 [2024-12-09 06:05:46.629575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:06:52.228 [2024-12-09 06:05:46.636268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ee3498 01:06:52.228 [2024-12-09 06:05:46.637286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:06:52.228 [2024-12-09 06:05:46.637327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:06:52.228 [2024-12-09 06:05:46.651769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eec840 01:06:52.228 [2024-12-09 06:05:46.653769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:06:52.228 [2024-12-09 06:05:46.653813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
01:06:52.228 [2024-12-09 06:05:46.660500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef1430
01:06:52.228 [2024-12-09 06:05:46.661521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:06:52.228 [2024-12-09 06:05:46.661561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
01:06:52.228 [2024-12-09 06:05:46.675114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef8a50
01:06:52.228 [2024-12-09 06:05:46.676903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:06:52.228 [2024-12-09 06:05:46.676939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
01:06:52.228 [2024-12-09 06:05:46.686844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016eed920
01:06:52.228 [2024-12-09 06:05:46.688198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:06:52.228 [2024-12-09 06:05:46.688369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
01:06:52.228 [2024-12-09 06:05:46.698242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2191570) with pdu=0x200016ef6458
01:06:52.228 [2024-12-09 06:05:46.699615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:06:52.228 [2024-12-09 06:05:46.699789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
01:06:52.228 20918.50 IOPS, 81.71 MiB/s
01:06:52.228 Latency(us)
01:06:52.228 [2024-12-09T06:05:46.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:06:52.228 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
01:06:52.228 nvme0n1 : 2.00 20946.01 81.82 0.00 0.00 6103.01 2532.07 17396.83
01:06:52.228 [2024-12-09T06:05:46.814Z] ===================================================================================================================
01:06:52.228 [2024-12-09T06:05:46.814Z] Total : 20946.01 81.82 0.00 0.00 6103.01 2532.07 17396.83
01:06:52.228 {
01:06:52.228   "results": [
01:06:52.228     {
01:06:52.228       "job": "nvme0n1",
01:06:52.228       "core_mask": "0x2",
01:06:52.228       "workload": "randwrite",
01:06:52.228       "status": "finished",
01:06:52.228       "queue_depth": 128,
01:06:52.228       "io_size": 4096,
01:06:52.228       "runtime": 2.003484,
01:06:52.228       "iops": 20946.0120470141,
01:06:52.228       "mibps": 81.82035955864883,
01:06:52.228       "io_failed": 0,
01:06:52.228       "io_timeout": 0,
01:06:52.228       "avg_latency_us": 6103.005838198499,
01:06:52.228       "min_latency_us": 2532.072727272727,
01:06:52.228       "max_latency_us": 17396.82909090909
01:06:52.228     }
01:06:52.228   ],
01:06:52.228   "core_count": 1
01:06:52.228 }
01:06:52.228 06:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
host/digest.sh@71 -- # get_transient_errcount nvme0n1
01:06:52.228 06:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
01:06:52.228 | .driver_specific
01:06:52.228 | .nvme_error
01:06:52.229 | .status_code
01:06:52.229 | .command_transient_transport_error'
01:06:52.229 06:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
01:06:52.229 06:05:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
01:06:52.487 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 164 > 0 ))
01:06:52.487 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93734
01:06:52.487 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 93734 ']'
01:06:52.487 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 93734
01:06:52.487 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
01:06:52.487 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:06:52.487 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93734
01:06:52.746 killing process with pid 93734
Received shutdown signal, test time was about 2.000000 seconds
01:06:52.746
01:06:52.746 Latency(us)
[2024-12-09T06:05:47.332Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-09T06:05:47.332Z] ===================================================================================================================
[2024-12-09T06:05:47.332Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:06:52.746 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
01:06:52.746 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
01:06:52.746 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93734'
01:06:52.746 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 93734
01:06:52.746 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 93734
01:06:52.746 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
01:06:52.746 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
01:06:52.746 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
01:06:52.746 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
01:06:52.746 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
01:06:52.746 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
01:06:52.746 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93811
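The check traced above is the crux of the digest-error pass: with the --nvme-error-stat option the harness passes to bdev_nvme_set_options, bdevperf's bdev_get_iostat output carries a per-status-code NVMe error breakdown, and the test simply asserts that command_transient_transport_error is non-zero (164 on this run). A minimal stand-alone sketch of the same query, assuming a bdevperf instance is still listening on /var/tmp/bperf.sock with the controller attached as nvme0 (both taken from this job's setup), could look like:

  # query the bperf RPC socket and pull the transient transport error counter for nvme0n1
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # the test only passes when at least one injected digest error was observed
  (( errcount > 0 )) && echo "saw ${errcount} transient transport errors"

This is just what host/digest.sh's get_transient_errcount and bperf_rpc helpers expand to in the trace, not a drop-in replacement for them.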
01:06:52.746 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93811 /var/tmp/bperf.sock 01:06:52.746 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 93811 ']' 01:06:52.746 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:06:52.746 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 01:06:52.746 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:06:52.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:06:52.746 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 01:06:52.746 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:06:52.746 [2024-12-09 06:05:47.268737] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:06:52.746 [2024-12-09 06:05:47.269087] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93811 ] 01:06:52.746 I/O size of 131072 is greater than zero copy threshold (65536). 01:06:52.746 Zero copy mechanism will not be used. 01:06:53.004 [2024-12-09 06:05:47.413153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:53.004 [2024-12-09 06:05:47.446318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:06:53.004 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:06:53.004 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 01:06:53.004 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:06:53.004 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 01:06:53.261 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 01:06:53.261 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:53.261 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:06:53.261 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:53.261 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:06:53.261 06:05:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 01:06:53.828 nvme0n1 01:06:53.828 06:05:48 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 01:06:53.828 06:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:53.828 06:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:06:53.828 06:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:53.828 06:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 01:06:53.828 06:05:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:06:53.828 I/O size of 131072 is greater than zero copy threshold (65536). 01:06:53.828 Zero copy mechanism will not be used. 01:06:53.828 Running I/O for 2 seconds... 01:06:53.828 [2024-12-09 06:05:48.295164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:53.828 [2024-12-09 06:05:48.295285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:53.828 [2024-12-09 06:05:48.295316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:53.828 [2024-12-09 06:05:48.300452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:53.828 [2024-12-09 06:05:48.300565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:53.828 [2024-12-09 06:05:48.300591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:53.828 [2024-12-09 06:05:48.305422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:53.828 [2024-12-09 06:05:48.305515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:53.828 [2024-12-09 06:05:48.305539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:53.828 [2024-12-09 06:05:48.310438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:53.828 [2024-12-09 06:05:48.310523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:53.828 [2024-12-09 06:05:48.310548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:53.828 [2024-12-09 06:05:48.315574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:53.828 [2024-12-09 06:05:48.315694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:53.828 [2024-12-09 06:05:48.315718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
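Before the WRITE failures above start streaming, the run_bperf_err randwrite 131072 16 pass is wired up entirely over RPC. The traced commands condense to roughly the sequence below; the paths, the /var/tmp/bperf.sock socket, the 10.0.0.3 target and the nqn.2016-06.io.spdk:cnode1 subsystem are the ones this job used, while the claim that the harness's bare rpc_cmd calls go to the target app's default RPC socket is an assumption not visible in this excerpt:

  # start bdevperf on core 1 with its own RPC socket: 131072-byte random writes, qd 16, 2-second runs
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  # keep per-status-code NVMe error counters and retry failed I/O indefinitely inside bdev_nvme
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # injection off while attaching the controller with TCP data digest enabled
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # switch the injection to corrupt crc32c results (arguments taken verbatim from the trace), then drive I/O
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each corrupted digest then surfaces as one data_crc32_calc_done error plus a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, exactly as in the groups above and below.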
01:06:53.828 [2024-12-09 06:05:48.320790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:53.828 [2024-12-09 06:05:48.320900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:53.828 [2024-12-09 06:05:48.320925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:53.828 [2024-12-09 06:05:48.325742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:53.828 [2024-12-09 06:05:48.325835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:53.828 [2024-12-09 06:05:48.325860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:53.828 [2024-12-09 06:05:48.330725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:53.829 [2024-12-09 06:05:48.330836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:53.829 [2024-12-09 06:05:48.330861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:53.829 [2024-12-09 06:05:48.335716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:53.829 [2024-12-09 06:05:48.335856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:53.829 [2024-12-09 06:05:48.335882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:53.829 [2024-12-09 06:05:48.340722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:53.829 [2024-12-09 06:05:48.340837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:53.829 [2024-12-09 06:05:48.340865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:53.829 [2024-12-09 06:05:48.345660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:53.829 [2024-12-09 06:05:48.345746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:53.829 [2024-12-09 06:05:48.345772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:53.829 [2024-12-09 06:05:48.350565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:53.829 [2024-12-09 06:05:48.350680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:53.829 [2024-12-09 06:05:48.350707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:53.829 [2024-12-09 06:05:48.355546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:53.829 [2024-12-09 06:05:48.355640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:53.829 [2024-12-09 06:05:48.355680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:53.829 [2024-12-09 06:05:48.360517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:53.829 [2024-12-09 06:05:48.360607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:53.829 [2024-12-09 06:05:48.360631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:53.829 [2024-12-09 06:05:48.365435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:53.829 [2024-12-09 06:05:48.365523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:53.829 [2024-12-09 06:05:48.365548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:53.829 [2024-12-09 06:05:48.370419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:53.829 [2024-12-09 06:05:48.370504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:53.829 [2024-12-09 06:05:48.370530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:53.829 [2024-12-09 06:05:48.375434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:53.829 [2024-12-09 06:05:48.375549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:53.829 [2024-12-09 06:05:48.375576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:53.829 [2024-12-09 06:05:48.380391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:53.829 [2024-12-09 06:05:48.380485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:53.829 [2024-12-09 06:05:48.380510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:53.829 [2024-12-09 06:05:48.385367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:53.829 [2024-12-09 06:05:48.385458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:53.829 [2024-12-09 06:05:48.385483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:53.829 [2024-12-09 06:05:48.390428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:53.829 [2024-12-09 06:05:48.390518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:53.829 [2024-12-09 06:05:48.390542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:53.829 [2024-12-09 06:05:48.395430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:53.829 [2024-12-09 06:05:48.395511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:53.829 [2024-12-09 06:05:48.395534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:53.829 [2024-12-09 06:05:48.400409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:53.829 [2024-12-09 06:05:48.400515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:53.829 [2024-12-09 06:05:48.400539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:53.829 [2024-12-09 06:05:48.405434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:53.829 [2024-12-09 06:05:48.405524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:53.829 [2024-12-09 06:05:48.405548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:53.829 [2024-12-09 06:05:48.410594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:53.829 [2024-12-09 06:05:48.410778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:53.829 [2024-12-09 06:05:48.410806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.089 [2024-12-09 06:05:48.415780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.089 [2024-12-09 06:05:48.415905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.089 [2024-12-09 06:05:48.415931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.089 [2024-12-09 06:05:48.420945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.089 [2024-12-09 06:05:48.421065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.089 [2024-12-09 06:05:48.421091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.089 [2024-12-09 06:05:48.426043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.089 [2024-12-09 06:05:48.426148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.089 [2024-12-09 06:05:48.426174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.089 [2024-12-09 06:05:48.431053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.089 [2024-12-09 06:05:48.431160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.089 [2024-12-09 06:05:48.431185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.089 [2024-12-09 06:05:48.436041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.089 [2024-12-09 06:05:48.436152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.089 [2024-12-09 06:05:48.436179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.089 [2024-12-09 06:05:48.441040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.089 [2024-12-09 06:05:48.441119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.089 [2024-12-09 06:05:48.441141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.089 [2024-12-09 06:05:48.446001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.090 [2024-12-09 06:05:48.446086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.090 [2024-12-09 06:05:48.446108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.090 [2024-12-09 06:05:48.451018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.090 [2024-12-09 06:05:48.451099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.090 [2024-12-09 06:05:48.451129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.090 [2024-12-09 06:05:48.455956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.090 [2024-12-09 06:05:48.456062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.090 [2024-12-09 06:05:48.456086] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.090 [2024-12-09 06:05:48.460953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.090 [2024-12-09 06:05:48.461066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.090 [2024-12-09 06:05:48.461090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.090 [2024-12-09 06:05:48.465881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.090 [2024-12-09 06:05:48.465972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.090 [2024-12-09 06:05:48.465997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.090 [2024-12-09 06:05:48.470922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.090 [2024-12-09 06:05:48.471036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.090 [2024-12-09 06:05:48.471062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.090 [2024-12-09 06:05:48.475967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.090 [2024-12-09 06:05:48.476081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.090 [2024-12-09 06:05:48.476107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.090 [2024-12-09 06:05:48.481005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.090 [2024-12-09 06:05:48.481124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.090 [2024-12-09 06:05:48.481150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.090 [2024-12-09 06:05:48.486049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.090 [2024-12-09 06:05:48.486159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.090 [2024-12-09 06:05:48.486185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.090 [2024-12-09 06:05:48.491121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.090 [2024-12-09 06:05:48.491227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.090 [2024-12-09 
06:05:48.491253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.090 [2024-12-09 06:05:48.496132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.090 [2024-12-09 06:05:48.496247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.090 [2024-12-09 06:05:48.496274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.090 [2024-12-09 06:05:48.501114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.090 [2024-12-09 06:05:48.501197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.090 [2024-12-09 06:05:48.501222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.090 [2024-12-09 06:05:48.506064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.090 [2024-12-09 06:05:48.506146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.090 [2024-12-09 06:05:48.506170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.090 [2024-12-09 06:05:48.511086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.090 [2024-12-09 06:05:48.511192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.090 [2024-12-09 06:05:48.511217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.090 [2024-12-09 06:05:48.516121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.090 [2024-12-09 06:05:48.516230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.090 [2024-12-09 06:05:48.516258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.090 [2024-12-09 06:05:48.521082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.090 [2024-12-09 06:05:48.521197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.090 [2024-12-09 06:05:48.521224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.090 [2024-12-09 06:05:48.526046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.090 [2024-12-09 06:05:48.526157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:06:54.090 [2024-12-09 06:05:48.526184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.090 [2024-12-09 06:05:48.531009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.090 [2024-12-09 06:05:48.531086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.090 [2024-12-09 06:05:48.531108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.090 [2024-12-09 06:05:48.535973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.090 [2024-12-09 06:05:48.536051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.090 [2024-12-09 06:05:48.536074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.090 [2024-12-09 06:05:48.540872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.090 [2024-12-09 06:05:48.540953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.090 [2024-12-09 06:05:48.540976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.090 [2024-12-09 06:05:48.545759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.090 [2024-12-09 06:05:48.545843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.090 [2024-12-09 06:05:48.545865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.090 [2024-12-09 06:05:48.550638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.090 [2024-12-09 06:05:48.550750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.090 [2024-12-09 06:05:48.550773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.090 [2024-12-09 06:05:48.555770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.090 [2024-12-09 06:05:48.555883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.090 [2024-12-09 06:05:48.555906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.090 [2024-12-09 06:05:48.560872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.090 [2024-12-09 06:05:48.560984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.090 [2024-12-09 06:05:48.561007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.090 [2024-12-09 06:05:48.565894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.090 [2024-12-09 06:05:48.565978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.090 [2024-12-09 06:05:48.566001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.090 [2024-12-09 06:05:48.570914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.090 [2024-12-09 06:05:48.571007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.090 [2024-12-09 06:05:48.571030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.090 [2024-12-09 06:05:48.576247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.090 [2024-12-09 06:05:48.576353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.091 [2024-12-09 06:05:48.576376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.091 [2024-12-09 06:05:48.581406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.091 [2024-12-09 06:05:48.581497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.091 [2024-12-09 06:05:48.581521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.091 [2024-12-09 06:05:48.586407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.091 [2024-12-09 06:05:48.586510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.091 [2024-12-09 06:05:48.586533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.091 [2024-12-09 06:05:48.591344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.091 [2024-12-09 06:05:48.591428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.091 [2024-12-09 06:05:48.591451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.091 [2024-12-09 06:05:48.596222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.091 [2024-12-09 06:05:48.596306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.091 [2024-12-09 06:05:48.596329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.091 [2024-12-09 06:05:48.601184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.091 [2024-12-09 06:05:48.601288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.091 [2024-12-09 06:05:48.601311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.091 [2024-12-09 06:05:48.606119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.091 [2024-12-09 06:05:48.606210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.091 [2024-12-09 06:05:48.606234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.091 [2024-12-09 06:05:48.611117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.091 [2024-12-09 06:05:48.611224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.091 [2024-12-09 06:05:48.611248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.091 [2024-12-09 06:05:48.616100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.091 [2024-12-09 06:05:48.616186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.091 [2024-12-09 06:05:48.616208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.091 [2024-12-09 06:05:48.621156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.091 [2024-12-09 06:05:48.621244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.091 [2024-12-09 06:05:48.621266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.091 [2024-12-09 06:05:48.626058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.091 [2024-12-09 06:05:48.626137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.091 [2024-12-09 06:05:48.626160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.091 [2024-12-09 06:05:48.631026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.091 [2024-12-09 06:05:48.631109] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.091 [2024-12-09 06:05:48.631133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.091 [2024-12-09 06:05:48.635924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.091 [2024-12-09 06:05:48.636005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.091 [2024-12-09 06:05:48.636029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.091 [2024-12-09 06:05:48.640883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.091 [2024-12-09 06:05:48.640986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.091 [2024-12-09 06:05:48.641011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.091 [2024-12-09 06:05:48.645928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.091 [2024-12-09 06:05:48.646026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.091 [2024-12-09 06:05:48.646050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.091 [2024-12-09 06:05:48.650891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.091 [2024-12-09 06:05:48.651009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.091 [2024-12-09 06:05:48.651034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.091 [2024-12-09 06:05:48.655830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.091 [2024-12-09 06:05:48.655924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.091 [2024-12-09 06:05:48.655947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.091 [2024-12-09 06:05:48.660799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.091 [2024-12-09 06:05:48.660884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.091 [2024-12-09 06:05:48.660909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.091 [2024-12-09 06:05:48.665811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.091 [2024-12-09 06:05:48.665925] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.091 [2024-12-09 06:05:48.665952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.091 [2024-12-09 06:05:48.670948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.091 [2024-12-09 06:05:48.671053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.091 [2024-12-09 06:05:48.671084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.357 [2024-12-09 06:05:48.676115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.357 [2024-12-09 06:05:48.676205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.357 [2024-12-09 06:05:48.676230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.357 [2024-12-09 06:05:48.681249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.357 [2024-12-09 06:05:48.681366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.357 [2024-12-09 06:05:48.681392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.357 [2024-12-09 06:05:48.686278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.357 [2024-12-09 06:05:48.686364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.357 [2024-12-09 06:05:48.686389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.357 [2024-12-09 06:05:48.691282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.357 [2024-12-09 06:05:48.691394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.357 [2024-12-09 06:05:48.691419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.357 [2024-12-09 06:05:48.696366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.357 [2024-12-09 06:05:48.696484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.357 [2024-12-09 06:05:48.696509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.357 [2024-12-09 06:05:48.701364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.357 [2024-12-09 
06:05:48.701454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.357 [2024-12-09 06:05:48.701479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.357 [2024-12-09 06:05:48.706327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.357 [2024-12-09 06:05:48.706409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.357 [2024-12-09 06:05:48.706433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.357 [2024-12-09 06:05:48.711398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.357 [2024-12-09 06:05:48.711506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.357 [2024-12-09 06:05:48.711529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.357 [2024-12-09 06:05:48.716438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.357 [2024-12-09 06:05:48.716527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.357 [2024-12-09 06:05:48.716550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.357 [2024-12-09 06:05:48.721444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.357 [2024-12-09 06:05:48.721528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.357 [2024-12-09 06:05:48.721551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.357 [2024-12-09 06:05:48.726367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.357 [2024-12-09 06:05:48.726453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.357 [2024-12-09 06:05:48.726477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.357 [2024-12-09 06:05:48.731472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.357 [2024-12-09 06:05:48.731576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.357 [2024-12-09 06:05:48.731599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.357 [2024-12-09 06:05:48.736536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with 
pdu=0x200016eff3c8 01:06:54.357 [2024-12-09 06:05:48.736636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.357 [2024-12-09 06:05:48.736658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.357 [2024-12-09 06:05:48.741571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.357 [2024-12-09 06:05:48.741673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.357 [2024-12-09 06:05:48.741697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.357 [2024-12-09 06:05:48.746484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.357 [2024-12-09 06:05:48.746572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.357 [2024-12-09 06:05:48.746597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.357 [2024-12-09 06:05:48.751478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.357 [2024-12-09 06:05:48.751571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.357 [2024-12-09 06:05:48.751595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.357 [2024-12-09 06:05:48.756492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.357 [2024-12-09 06:05:48.756591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.358 [2024-12-09 06:05:48.756615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.358 [2024-12-09 06:05:48.761566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.358 [2024-12-09 06:05:48.761648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.358 [2024-12-09 06:05:48.761687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.358 [2024-12-09 06:05:48.766528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.358 [2024-12-09 06:05:48.766634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.358 [2024-12-09 06:05:48.766657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.358 [2024-12-09 06:05:48.771463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.358 [2024-12-09 06:05:48.771565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.358 [2024-12-09 06:05:48.771588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.358 [2024-12-09 06:05:48.776495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.358 [2024-12-09 06:05:48.776591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.358 [2024-12-09 06:05:48.776614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.358 [2024-12-09 06:05:48.781529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.358 [2024-12-09 06:05:48.781635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.358 [2024-12-09 06:05:48.781659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.358 [2024-12-09 06:05:48.786491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.358 [2024-12-09 06:05:48.786588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.358 [2024-12-09 06:05:48.786611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.358 [2024-12-09 06:05:48.791466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.358 [2024-12-09 06:05:48.791550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.358 [2024-12-09 06:05:48.791573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.358 [2024-12-09 06:05:48.796476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.358 [2024-12-09 06:05:48.796599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.358 [2024-12-09 06:05:48.796622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.358 [2024-12-09 06:05:48.801516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.358 [2024-12-09 06:05:48.801600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.358 [2024-12-09 06:05:48.801623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.358 [2024-12-09 06:05:48.806432] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.358 [2024-12-09 06:05:48.806541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.358 [2024-12-09 06:05:48.806564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.358 [2024-12-09 06:05:48.811394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.358 [2024-12-09 06:05:48.811483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.358 [2024-12-09 06:05:48.811507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.358 [2024-12-09 06:05:48.816460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.358 [2024-12-09 06:05:48.816563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.358 [2024-12-09 06:05:48.816587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.358 [2024-12-09 06:05:48.821521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.358 [2024-12-09 06:05:48.821620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.358 [2024-12-09 06:05:48.821643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.358 [2024-12-09 06:05:48.826505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.358 [2024-12-09 06:05:48.826594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.358 [2024-12-09 06:05:48.826617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.358 [2024-12-09 06:05:48.831722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.358 [2024-12-09 06:05:48.831862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.358 [2024-12-09 06:05:48.831884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.358 [2024-12-09 06:05:48.837139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.358 [2024-12-09 06:05:48.837256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.358 [2024-12-09 06:05:48.837278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.358 [2024-12-09 06:05:48.842214] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.358 [2024-12-09 06:05:48.842319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.358 [2024-12-09 06:05:48.842342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.358 [2024-12-09 06:05:48.847326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.358 [2024-12-09 06:05:48.847424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.358 [2024-12-09 06:05:48.847447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.358 [2024-12-09 06:05:48.852386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.358 [2024-12-09 06:05:48.852459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.358 [2024-12-09 06:05:48.852482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.358 [2024-12-09 06:05:48.857620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.358 [2024-12-09 06:05:48.857797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.358 [2024-12-09 06:05:48.857820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.358 [2024-12-09 06:05:48.862819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.358 [2024-12-09 06:05:48.862917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.358 [2024-12-09 06:05:48.862939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.358 [2024-12-09 06:05:48.867726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.358 [2024-12-09 06:05:48.867848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.358 [2024-12-09 06:05:48.867870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.358 [2024-12-09 06:05:48.872742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.358 [2024-12-09 06:05:48.872865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.358 [2024-12-09 06:05:48.872887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.358 
[2024-12-09 06:05:48.877633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.358 [2024-12-09 06:05:48.877762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.358 [2024-12-09 06:05:48.877785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.358 [2024-12-09 06:05:48.882549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.358 [2024-12-09 06:05:48.882713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.358 [2024-12-09 06:05:48.882737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.358 [2024-12-09 06:05:48.887411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.358 [2024-12-09 06:05:48.887542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.358 [2024-12-09 06:05:48.887565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.358 [2024-12-09 06:05:48.892407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.359 [2024-12-09 06:05:48.892539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.359 [2024-12-09 06:05:48.892561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.359 [2024-12-09 06:05:48.897610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.359 [2024-12-09 06:05:48.897720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.359 [2024-12-09 06:05:48.897757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.359 [2024-12-09 06:05:48.902402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.359 [2024-12-09 06:05:48.902518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.359 [2024-12-09 06:05:48.902540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.359 [2024-12-09 06:05:48.907348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.359 [2024-12-09 06:05:48.907465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.359 [2024-12-09 06:05:48.907486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 01:06:54.359 [2024-12-09 06:05:48.912354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.359 [2024-12-09 06:05:48.912471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.359 [2024-12-09 06:05:48.912493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.359 [2024-12-09 06:05:48.917267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.359 [2024-12-09 06:05:48.917386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.359 [2024-12-09 06:05:48.917408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.359 [2024-12-09 06:05:48.922295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.359 [2024-12-09 06:05:48.922407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.359 [2024-12-09 06:05:48.922429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.359 [2024-12-09 06:05:48.927201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.359 [2024-12-09 06:05:48.927310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.359 [2024-12-09 06:05:48.927332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.359 [2024-12-09 06:05:48.932005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.359 [2024-12-09 06:05:48.932117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.359 [2024-12-09 06:05:48.932147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.359 [2024-12-09 06:05:48.937089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.359 [2024-12-09 06:05:48.937194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.359 [2024-12-09 06:05:48.937215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.617 [2024-12-09 06:05:48.942355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.617 [2024-12-09 06:05:48.942461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.617 [2024-12-09 06:05:48.942482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.617 [2024-12-09 06:05:48.947561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.617 [2024-12-09 06:05:48.947716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.617 [2024-12-09 06:05:48.947753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.618 [2024-12-09 06:05:48.952491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.618 [2024-12-09 06:05:48.952602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.618 [2024-12-09 06:05:48.952623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.618 [2024-12-09 06:05:48.957379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.618 [2024-12-09 06:05:48.957497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.618 [2024-12-09 06:05:48.957519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.618 [2024-12-09 06:05:48.962341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.618 [2024-12-09 06:05:48.962458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.618 [2024-12-09 06:05:48.962480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.618 [2024-12-09 06:05:48.967307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.618 [2024-12-09 06:05:48.967422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.618 [2024-12-09 06:05:48.967443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.618 [2024-12-09 06:05:48.972034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.618 [2024-12-09 06:05:48.972158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.618 [2024-12-09 06:05:48.972179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.618 [2024-12-09 06:05:48.976874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.618 [2024-12-09 06:05:48.976983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.618 [2024-12-09 06:05:48.977005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.618 [2024-12-09 06:05:48.981651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.618 [2024-12-09 06:05:48.981781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.618 [2024-12-09 06:05:48.981803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.618 [2024-12-09 06:05:48.986425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.618 [2024-12-09 06:05:48.986544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.618 [2024-12-09 06:05:48.986565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.618 [2024-12-09 06:05:48.991393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.618 [2024-12-09 06:05:48.991501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.618 [2024-12-09 06:05:48.991538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.618 [2024-12-09 06:05:48.996235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.618 [2024-12-09 06:05:48.996351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.618 [2024-12-09 06:05:48.996373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.618 [2024-12-09 06:05:49.001030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.618 [2024-12-09 06:05:49.001140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.618 [2024-12-09 06:05:49.001162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.618 [2024-12-09 06:05:49.005782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.618 [2024-12-09 06:05:49.005897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.618 [2024-12-09 06:05:49.005919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.618 [2024-12-09 06:05:49.010540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.618 [2024-12-09 06:05:49.010659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.618 [2024-12-09 06:05:49.010703] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.618 [2024-12-09 06:05:49.015358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.618 [2024-12-09 06:05:49.015475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.618 [2024-12-09 06:05:49.015497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.618 [2024-12-09 06:05:49.020176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.618 [2024-12-09 06:05:49.020290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.618 [2024-12-09 06:05:49.020312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.618 [2024-12-09 06:05:49.024968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.618 [2024-12-09 06:05:49.025078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.618 [2024-12-09 06:05:49.025099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.618 [2024-12-09 06:05:49.029656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.618 [2024-12-09 06:05:49.029773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.618 [2024-12-09 06:05:49.029795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.618 [2024-12-09 06:05:49.034433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.618 [2024-12-09 06:05:49.034549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.618 [2024-12-09 06:05:49.034571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.618 [2024-12-09 06:05:49.039178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.618 [2024-12-09 06:05:49.039295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.618 [2024-12-09 06:05:49.039316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.618 [2024-12-09 06:05:49.044011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.618 [2024-12-09 06:05:49.044124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.618 [2024-12-09 
06:05:49.044146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.618 [2024-12-09 06:05:49.048856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.618 [2024-12-09 06:05:49.048967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.618 [2024-12-09 06:05:49.048989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.618 [2024-12-09 06:05:49.053590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.618 [2024-12-09 06:05:49.053724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.618 [2024-12-09 06:05:49.053746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.618 [2024-12-09 06:05:49.058397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.618 [2024-12-09 06:05:49.058532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.618 [2024-12-09 06:05:49.058554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.618 [2024-12-09 06:05:49.063278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.618 [2024-12-09 06:05:49.063396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.618 [2024-12-09 06:05:49.063418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.618 [2024-12-09 06:05:49.068177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.618 [2024-12-09 06:05:49.068292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.618 [2024-12-09 06:05:49.068314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.618 [2024-12-09 06:05:49.073061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.618 [2024-12-09 06:05:49.073171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.618 [2024-12-09 06:05:49.073192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.618 [2024-12-09 06:05:49.077935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.618 [2024-12-09 06:05:49.078048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 01:06:54.619 [2024-12-09 06:05:49.078070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.619 [2024-12-09 06:05:49.082774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.619 [2024-12-09 06:05:49.082860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.619 [2024-12-09 06:05:49.082883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.619 [2024-12-09 06:05:49.087550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.619 [2024-12-09 06:05:49.087679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.619 [2024-12-09 06:05:49.087702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.619 [2024-12-09 06:05:49.093044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.619 [2024-12-09 06:05:49.093147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.619 [2024-12-09 06:05:49.093168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.619 [2024-12-09 06:05:49.098194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.619 [2024-12-09 06:05:49.098296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.619 [2024-12-09 06:05:49.098319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.619 [2024-12-09 06:05:49.103128] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.619 [2024-12-09 06:05:49.103245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.619 [2024-12-09 06:05:49.103269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.619 [2024-12-09 06:05:49.108016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.619 [2024-12-09 06:05:49.108121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.619 [2024-12-09 06:05:49.108143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.619 [2024-12-09 06:05:49.113552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.619 [2024-12-09 06:05:49.113685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.619 [2024-12-09 06:05:49.113707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.619 [2024-12-09 06:05:49.118563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.619 [2024-12-09 06:05:49.118775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.619 [2024-12-09 06:05:49.118798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.619 [2024-12-09 06:05:49.123592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.619 [2024-12-09 06:05:49.123749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.619 [2024-12-09 06:05:49.123786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.619 [2024-12-09 06:05:49.128749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.619 [2024-12-09 06:05:49.128882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.619 [2024-12-09 06:05:49.128904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.619 [2024-12-09 06:05:49.133841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.619 [2024-12-09 06:05:49.133954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.619 [2024-12-09 06:05:49.133976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.619 [2024-12-09 06:05:49.138604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.619 [2024-12-09 06:05:49.138759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.619 [2024-12-09 06:05:49.138782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.619 [2024-12-09 06:05:49.143420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.619 [2024-12-09 06:05:49.143535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.619 [2024-12-09 06:05:49.143558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.619 [2024-12-09 06:05:49.148231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.619 [2024-12-09 06:05:49.148349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.619 [2024-12-09 06:05:49.148370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.619 [2024-12-09 06:05:49.153128] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.619 [2024-12-09 06:05:49.153230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.619 [2024-12-09 06:05:49.153252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.619 [2024-12-09 06:05:49.157930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.619 [2024-12-09 06:05:49.158040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.619 [2024-12-09 06:05:49.158062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.619 [2024-12-09 06:05:49.162730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.619 [2024-12-09 06:05:49.162833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.619 [2024-12-09 06:05:49.162856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.619 [2024-12-09 06:05:49.167476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.619 [2024-12-09 06:05:49.167596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.619 [2024-12-09 06:05:49.167618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.619 [2024-12-09 06:05:49.172274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.619 [2024-12-09 06:05:49.172386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.619 [2024-12-09 06:05:49.172408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.619 [2024-12-09 06:05:49.177152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.619 [2024-12-09 06:05:49.177267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.619 [2024-12-09 06:05:49.177289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.619 [2024-12-09 06:05:49.181973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.619 [2024-12-09 06:05:49.182081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.619 [2024-12-09 06:05:49.182103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.619 [2024-12-09 06:05:49.186778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.619 [2024-12-09 06:05:49.186878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.619 [2024-12-09 06:05:49.186900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.619 [2024-12-09 06:05:49.191529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.619 [2024-12-09 06:05:49.191645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.619 [2024-12-09 06:05:49.191668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.619 [2024-12-09 06:05:49.196581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.619 [2024-12-09 06:05:49.196697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.619 [2024-12-09 06:05:49.196733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.878 [2024-12-09 06:05:49.202054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.878 [2024-12-09 06:05:49.202145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.878 [2024-12-09 06:05:49.202168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.878 [2024-12-09 06:05:49.207139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.878 [2024-12-09 06:05:49.207228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.878 [2024-12-09 06:05:49.207251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.878 [2024-12-09 06:05:49.212117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.878 [2024-12-09 06:05:49.212254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.878 [2024-12-09 06:05:49.212277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.878 [2024-12-09 06:05:49.217271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.878 [2024-12-09 
06:05:49.217359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.878 [2024-12-09 06:05:49.217382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.878 [2024-12-09 06:05:49.223006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.878 [2024-12-09 06:05:49.223139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.878 [2024-12-09 06:05:49.223161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.879 [2024-12-09 06:05:49.228245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.879 [2024-12-09 06:05:49.228395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.879 [2024-12-09 06:05:49.228417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.879 [2024-12-09 06:05:49.233355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.879 [2024-12-09 06:05:49.233472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.879 [2024-12-09 06:05:49.233494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.879 [2024-12-09 06:05:49.238448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.879 [2024-12-09 06:05:49.238566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.879 [2024-12-09 06:05:49.238588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.879 [2024-12-09 06:05:49.243546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.879 [2024-12-09 06:05:49.243638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.879 [2024-12-09 06:05:49.243663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.879 [2024-12-09 06:05:49.248579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.879 [2024-12-09 06:05:49.248676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.879 [2024-12-09 06:05:49.248700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.879 [2024-12-09 06:05:49.253589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 
01:06:54.879 [2024-12-09 06:05:49.253698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.879 [2024-12-09 06:05:49.253733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.879 [2024-12-09 06:05:49.258454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.879 [2024-12-09 06:05:49.258587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.879 [2024-12-09 06:05:49.258609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.879 [2024-12-09 06:05:49.263334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.879 [2024-12-09 06:05:49.263450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.879 [2024-12-09 06:05:49.263471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.879 [2024-12-09 06:05:49.268240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.879 [2024-12-09 06:05:49.268357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.879 [2024-12-09 06:05:49.268379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.879 [2024-12-09 06:05:49.273034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.879 [2024-12-09 06:05:49.273149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.879 [2024-12-09 06:05:49.273171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.879 [2024-12-09 06:05:49.277880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.879 [2024-12-09 06:05:49.277996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.879 [2024-12-09 06:05:49.278018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.879 [2024-12-09 06:05:49.282957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.879 [2024-12-09 06:05:49.283043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.879 [2024-12-09 06:05:49.283066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.879 [2024-12-09 06:05:49.288064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.879 [2024-12-09 06:05:49.288162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.879 [2024-12-09 06:05:49.288185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.879 [2024-12-09 06:05:49.293163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.879 [2024-12-09 06:05:49.293275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.879 [2024-12-09 06:05:49.293314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.879 6204.00 IOPS, 775.50 MiB/s [2024-12-09T06:05:49.465Z] [2024-12-09 06:05:49.299552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.879 [2024-12-09 06:05:49.299649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.879 [2024-12-09 06:05:49.299673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.879 [2024-12-09 06:05:49.304588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.879 [2024-12-09 06:05:49.304724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.879 [2024-12-09 06:05:49.304747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.879 [2024-12-09 06:05:49.309493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.879 [2024-12-09 06:05:49.309612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.879 [2024-12-09 06:05:49.309652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.879 [2024-12-09 06:05:49.314232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.879 [2024-12-09 06:05:49.314365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.879 [2024-12-09 06:05:49.314387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.879 [2024-12-09 06:05:49.319142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.879 [2024-12-09 06:05:49.319263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.879 [2024-12-09 06:05:49.319287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.879 [2024-12-09 
06:05:49.324130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.879 [2024-12-09 06:05:49.324246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.879 [2024-12-09 06:05:49.324269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.879 [2024-12-09 06:05:49.329041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.879 [2024-12-09 06:05:49.329151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.879 [2024-12-09 06:05:49.329174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.879 [2024-12-09 06:05:49.333935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.879 [2024-12-09 06:05:49.334051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.879 [2024-12-09 06:05:49.334073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.879 [2024-12-09 06:05:49.339000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.879 [2024-12-09 06:05:49.339114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.879 [2024-12-09 06:05:49.339137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.879 [2024-12-09 06:05:49.344019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.879 [2024-12-09 06:05:49.344143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.879 [2024-12-09 06:05:49.344165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.879 [2024-12-09 06:05:49.348757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.879 [2024-12-09 06:05:49.348879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.879 [2024-12-09 06:05:49.348901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.879 [2024-12-09 06:05:49.353572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.879 [2024-12-09 06:05:49.353708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.879 [2024-12-09 06:05:49.353731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
01:06:54.879 [2024-12-09 06:05:49.358581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.880 [2024-12-09 06:05:49.358722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.880 [2024-12-09 06:05:49.358745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.880 [2024-12-09 06:05:49.363723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.880 [2024-12-09 06:05:49.363841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.880 [2024-12-09 06:05:49.363863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.880 [2024-12-09 06:05:49.368885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.880 [2024-12-09 06:05:49.368988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.880 [2024-12-09 06:05:49.369009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.880 [2024-12-09 06:05:49.374083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.880 [2024-12-09 06:05:49.374188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.880 [2024-12-09 06:05:49.374211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.880 [2024-12-09 06:05:49.379244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.880 [2024-12-09 06:05:49.379374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.880 [2024-12-09 06:05:49.379397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.880 [2024-12-09 06:05:49.384268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.880 [2024-12-09 06:05:49.384385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.880 [2024-12-09 06:05:49.384407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.880 [2024-12-09 06:05:49.389101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.880 [2024-12-09 06:05:49.389216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.880 [2024-12-09 06:05:49.389239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.880 [2024-12-09 06:05:49.394137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.880 [2024-12-09 06:05:49.394237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.880 [2024-12-09 06:05:49.394259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.880 [2024-12-09 06:05:49.399956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.880 [2024-12-09 06:05:49.400055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.880 [2024-12-09 06:05:49.400077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.880 [2024-12-09 06:05:49.405167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.880 [2024-12-09 06:05:49.405316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.880 [2024-12-09 06:05:49.405338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.880 [2024-12-09 06:05:49.410313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.880 [2024-12-09 06:05:49.410411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.880 [2024-12-09 06:05:49.410433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.880 [2024-12-09 06:05:49.415455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.880 [2024-12-09 06:05:49.415573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.880 [2024-12-09 06:05:49.415596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.880 [2024-12-09 06:05:49.420409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.880 [2024-12-09 06:05:49.420527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.880 [2024-12-09 06:05:49.420549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.880 [2024-12-09 06:05:49.425295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.880 [2024-12-09 06:05:49.425442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.880 [2024-12-09 06:05:49.425464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.880 [2024-12-09 06:05:49.430291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.880 [2024-12-09 06:05:49.430410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.880 [2024-12-09 06:05:49.430439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.880 [2024-12-09 06:05:49.435135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.880 [2024-12-09 06:05:49.435246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.880 [2024-12-09 06:05:49.435268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.880 [2024-12-09 06:05:49.439980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.880 [2024-12-09 06:05:49.440090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.880 [2024-12-09 06:05:49.440113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:54.880 [2024-12-09 06:05:49.444733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.880 [2024-12-09 06:05:49.444850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.880 [2024-12-09 06:05:49.444872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:54.880 [2024-12-09 06:05:49.449591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.880 [2024-12-09 06:05:49.449718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.880 [2024-12-09 06:05:49.449742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:54.880 [2024-12-09 06:05:49.454476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.880 [2024-12-09 06:05:49.454593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.880 [2024-12-09 06:05:49.454617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:54.880 [2024-12-09 06:05:49.459560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:54.880 [2024-12-09 06:05:49.459668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:54.880 [2024-12-09 06:05:49.459704] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.139 [2024-12-09 06:05:49.464874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.139 [2024-12-09 06:05:49.465012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.139 [2024-12-09 06:05:49.465037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.139 [2024-12-09 06:05:49.470093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.139 [2024-12-09 06:05:49.470238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.139 [2024-12-09 06:05:49.470261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.139 [2024-12-09 06:05:49.475017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.139 [2024-12-09 06:05:49.475152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.139 [2024-12-09 06:05:49.475176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.139 [2024-12-09 06:05:49.479956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.139 [2024-12-09 06:05:49.480084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.139 [2024-12-09 06:05:49.480110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.139 [2024-12-09 06:05:49.484908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.139 [2024-12-09 06:05:49.485051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.139 [2024-12-09 06:05:49.485079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.139 [2024-12-09 06:05:49.489910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.139 [2024-12-09 06:05:49.490045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.139 [2024-12-09 06:05:49.490070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.139 [2024-12-09 06:05:49.494965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.139 [2024-12-09 06:05:49.495046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.139 [2024-12-09 06:05:49.495084] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.139 [2024-12-09 06:05:49.500091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.139 [2024-12-09 06:05:49.500216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.139 [2024-12-09 06:05:49.500254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.139 [2024-12-09 06:05:49.505192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.139 [2024-12-09 06:05:49.505309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.139 [2024-12-09 06:05:49.505333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.139 [2024-12-09 06:05:49.510334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.140 [2024-12-09 06:05:49.510434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.140 [2024-12-09 06:05:49.510458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.140 [2024-12-09 06:05:49.515374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.140 [2024-12-09 06:05:49.515471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.140 [2024-12-09 06:05:49.515498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.140 [2024-12-09 06:05:49.520534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.140 [2024-12-09 06:05:49.520621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.140 [2024-12-09 06:05:49.520645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.140 [2024-12-09 06:05:49.525645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.140 [2024-12-09 06:05:49.525742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.140 [2024-12-09 06:05:49.525772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.140 [2024-12-09 06:05:49.530772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.140 [2024-12-09 06:05:49.530884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.140 [2024-12-09 
06:05:49.530908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.140 [2024-12-09 06:05:49.535915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.140 [2024-12-09 06:05:49.536027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.140 [2024-12-09 06:05:49.536049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.140 [2024-12-09 06:05:49.541095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.140 [2024-12-09 06:05:49.541204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.140 [2024-12-09 06:05:49.541227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.140 [2024-12-09 06:05:49.546267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.140 [2024-12-09 06:05:49.546419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.140 [2024-12-09 06:05:49.546458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.140 [2024-12-09 06:05:49.551474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.140 [2024-12-09 06:05:49.551588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.140 [2024-12-09 06:05:49.551611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.140 [2024-12-09 06:05:49.556525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.140 [2024-12-09 06:05:49.556643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.140 [2024-12-09 06:05:49.556665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.140 [2024-12-09 06:05:49.561505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.140 [2024-12-09 06:05:49.561647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.140 [2024-12-09 06:05:49.561670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.140 [2024-12-09 06:05:49.566452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.140 [2024-12-09 06:05:49.566587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:06:55.140 [2024-12-09 06:05:49.566610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.140 [2024-12-09 06:05:49.571434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.140 [2024-12-09 06:05:49.571561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.140 [2024-12-09 06:05:49.571585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.140 [2024-12-09 06:05:49.576288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.140 [2024-12-09 06:05:49.576417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.140 [2024-12-09 06:05:49.576440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.140 [2024-12-09 06:05:49.581163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.140 [2024-12-09 06:05:49.581279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.140 [2024-12-09 06:05:49.581301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.140 [2024-12-09 06:05:49.586077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.140 [2024-12-09 06:05:49.586201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.140 [2024-12-09 06:05:49.586238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.140 [2024-12-09 06:05:49.591254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.140 [2024-12-09 06:05:49.591384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.140 [2024-12-09 06:05:49.591406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.140 [2024-12-09 06:05:49.596492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.140 [2024-12-09 06:05:49.596569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.140 [2024-12-09 06:05:49.596592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.140 [2024-12-09 06:05:49.601596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.140 [2024-12-09 06:05:49.601725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 01:06:55.140 [2024-12-09 06:05:49.601745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.140 [2024-12-09 06:05:49.606853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.140 [2024-12-09 06:05:49.606956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.140 [2024-12-09 06:05:49.606978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.140 [2024-12-09 06:05:49.612075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.140 [2024-12-09 06:05:49.612156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.140 [2024-12-09 06:05:49.612179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.140 [2024-12-09 06:05:49.617163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.140 [2024-12-09 06:05:49.617251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.140 [2024-12-09 06:05:49.617273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.140 [2024-12-09 06:05:49.622414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.140 [2024-12-09 06:05:49.622539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.140 [2024-12-09 06:05:49.622561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.140 [2024-12-09 06:05:49.627267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.140 [2024-12-09 06:05:49.627380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.140 [2024-12-09 06:05:49.627400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.140 [2024-12-09 06:05:49.632192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.140 [2024-12-09 06:05:49.632306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.140 [2024-12-09 06:05:49.632327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.140 [2024-12-09 06:05:49.636807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.140 [2024-12-09 06:05:49.636952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.140 [2024-12-09 06:05:49.636973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.141 [2024-12-09 06:05:49.641335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.141 [2024-12-09 06:05:49.641454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.141 [2024-12-09 06:05:49.641474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.141 [2024-12-09 06:05:49.645900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.141 [2024-12-09 06:05:49.646004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.141 [2024-12-09 06:05:49.646024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.141 [2024-12-09 06:05:49.650376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.141 [2024-12-09 06:05:49.650496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.141 [2024-12-09 06:05:49.650516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.141 [2024-12-09 06:05:49.655615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.141 [2024-12-09 06:05:49.655764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.141 [2024-12-09 06:05:49.655786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.141 [2024-12-09 06:05:49.660667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.141 [2024-12-09 06:05:49.660780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.141 [2024-12-09 06:05:49.660800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.141 [2024-12-09 06:05:49.665365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.141 [2024-12-09 06:05:49.665476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.141 [2024-12-09 06:05:49.665497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.141 [2024-12-09 06:05:49.670057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.141 [2024-12-09 06:05:49.670167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.141 [2024-12-09 06:05:49.670187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.141 [2024-12-09 06:05:49.674574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.141 [2024-12-09 06:05:49.674741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.141 [2024-12-09 06:05:49.674764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.141 [2024-12-09 06:05:49.679492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.141 [2024-12-09 06:05:49.679615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.141 [2024-12-09 06:05:49.679638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.141 [2024-12-09 06:05:49.684695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.141 [2024-12-09 06:05:49.684892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.141 [2024-12-09 06:05:49.684914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.141 [2024-12-09 06:05:49.690121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.141 [2024-12-09 06:05:49.690254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.141 [2024-12-09 06:05:49.690276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.141 [2024-12-09 06:05:49.695232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.141 [2024-12-09 06:05:49.695380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.141 [2024-12-09 06:05:49.695403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.141 [2024-12-09 06:05:49.700674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.141 [2024-12-09 06:05:49.700827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.141 [2024-12-09 06:05:49.700850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.141 [2024-12-09 06:05:49.705931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.141 [2024-12-09 06:05:49.706039] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.141 [2024-12-09 06:05:49.706061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.141 [2024-12-09 06:05:49.711186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.141 [2024-12-09 06:05:49.711299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.141 [2024-12-09 06:05:49.711321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.141 [2024-12-09 06:05:49.716270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.141 [2024-12-09 06:05:49.716386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.141 [2024-12-09 06:05:49.716408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.141 [2024-12-09 06:05:49.721925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.141 [2024-12-09 06:05:49.722023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.141 [2024-12-09 06:05:49.722057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.400 [2024-12-09 06:05:49.727191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.400 [2024-12-09 06:05:49.727300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.400 [2024-12-09 06:05:49.727322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.400 [2024-12-09 06:05:49.732464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.400 [2024-12-09 06:05:49.732572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.400 [2024-12-09 06:05:49.732595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.400 [2024-12-09 06:05:49.737727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.400 [2024-12-09 06:05:49.737877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.400 [2024-12-09 06:05:49.737898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.400 [2024-12-09 06:05:49.742917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.400 [2024-12-09 
06:05:49.743031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.400 [2024-12-09 06:05:49.743067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.400 [2024-12-09 06:05:49.748028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.400 [2024-12-09 06:05:49.748140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.400 [2024-12-09 06:05:49.748161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.400 [2024-12-09 06:05:49.753116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.400 [2024-12-09 06:05:49.753224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.400 [2024-12-09 06:05:49.753246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.400 [2024-12-09 06:05:49.758239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.400 [2024-12-09 06:05:49.758371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.400 [2024-12-09 06:05:49.758393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.400 [2024-12-09 06:05:49.763463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.400 [2024-12-09 06:05:49.763568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.400 [2024-12-09 06:05:49.763590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.400 [2024-12-09 06:05:49.768616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.400 [2024-12-09 06:05:49.768774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.400 [2024-12-09 06:05:49.768795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.400 [2024-12-09 06:05:49.773708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.400 [2024-12-09 06:05:49.773844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.400 [2024-12-09 06:05:49.773865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.400 [2024-12-09 06:05:49.778874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with 
pdu=0x200016eff3c8 01:06:55.400 [2024-12-09 06:05:49.778998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.400 [2024-12-09 06:05:49.779019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.400 [2024-12-09 06:05:49.784217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.400 [2024-12-09 06:05:49.784344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.400 [2024-12-09 06:05:49.784381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.400 [2024-12-09 06:05:49.789354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.400 [2024-12-09 06:05:49.789446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.400 [2024-12-09 06:05:49.789468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.400 [2024-12-09 06:05:49.794894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.400 [2024-12-09 06:05:49.794975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.400 [2024-12-09 06:05:49.795011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.400 [2024-12-09 06:05:49.799844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.401 [2024-12-09 06:05:49.799970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.401 [2024-12-09 06:05:49.799991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.401 [2024-12-09 06:05:49.804814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.401 [2024-12-09 06:05:49.804924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.401 [2024-12-09 06:05:49.804945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.401 [2024-12-09 06:05:49.809739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.401 [2024-12-09 06:05:49.809838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.401 [2024-12-09 06:05:49.809859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.401 [2024-12-09 06:05:49.814255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.401 [2024-12-09 06:05:49.814391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.401 [2024-12-09 06:05:49.814411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.401 [2024-12-09 06:05:49.819182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.401 [2024-12-09 06:05:49.819305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.401 [2024-12-09 06:05:49.819326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.401 [2024-12-09 06:05:49.823915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.401 [2024-12-09 06:05:49.824008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.401 [2024-12-09 06:05:49.824029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.401 [2024-12-09 06:05:49.828638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.401 [2024-12-09 06:05:49.828768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.401 [2024-12-09 06:05:49.828790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.401 [2024-12-09 06:05:49.833376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.401 [2024-12-09 06:05:49.833491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.401 [2024-12-09 06:05:49.833511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.401 [2024-12-09 06:05:49.838075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.401 [2024-12-09 06:05:49.838196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.401 [2024-12-09 06:05:49.838218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.401 [2024-12-09 06:05:49.842960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.401 [2024-12-09 06:05:49.843070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.401 [2024-12-09 06:05:49.843106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.401 [2024-12-09 06:05:49.847513] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.401 [2024-12-09 06:05:49.847625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.401 [2024-12-09 06:05:49.847645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.401 [2024-12-09 06:05:49.852170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.401 [2024-12-09 06:05:49.852276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.401 [2024-12-09 06:05:49.852297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.401 [2024-12-09 06:05:49.856735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.401 [2024-12-09 06:05:49.856856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.401 [2024-12-09 06:05:49.856894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.401 [2024-12-09 06:05:49.861822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.401 [2024-12-09 06:05:49.861965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.401 [2024-12-09 06:05:49.861987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.401 [2024-12-09 06:05:49.866852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.401 [2024-12-09 06:05:49.866942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.401 [2024-12-09 06:05:49.866964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.401 [2024-12-09 06:05:49.872079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.401 [2024-12-09 06:05:49.872197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.401 [2024-12-09 06:05:49.872219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.401 [2024-12-09 06:05:49.877264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.401 [2024-12-09 06:05:49.877373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.401 [2024-12-09 06:05:49.877395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.401 
[2024-12-09 06:05:49.882322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.401 [2024-12-09 06:05:49.882414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.401 [2024-12-09 06:05:49.882436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.401 [2024-12-09 06:05:49.887395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.401 [2024-12-09 06:05:49.887484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.401 [2024-12-09 06:05:49.887506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.401 [2024-12-09 06:05:49.892622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.401 [2024-12-09 06:05:49.892744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.401 [2024-12-09 06:05:49.892767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.401 [2024-12-09 06:05:49.897698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.401 [2024-12-09 06:05:49.897850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.401 [2024-12-09 06:05:49.897872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.401 [2024-12-09 06:05:49.902883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.401 [2024-12-09 06:05:49.902984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.401 [2024-12-09 06:05:49.903006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.401 [2024-12-09 06:05:49.908066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.401 [2024-12-09 06:05:49.908191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.401 [2024-12-09 06:05:49.908213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.401 [2024-12-09 06:05:49.913624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.401 [2024-12-09 06:05:49.913747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.401 [2024-12-09 06:05:49.913784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 01:06:55.401 [2024-12-09 06:05:49.919115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.401 [2024-12-09 06:05:49.919250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.401 [2024-12-09 06:05:49.919272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.401 [2024-12-09 06:05:49.924358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.401 [2024-12-09 06:05:49.924462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.401 [2024-12-09 06:05:49.924484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.401 [2024-12-09 06:05:49.929639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.401 [2024-12-09 06:05:49.929788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.401 [2024-12-09 06:05:49.929810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.401 [2024-12-09 06:05:49.934738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.402 [2024-12-09 06:05:49.934827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.402 [2024-12-09 06:05:49.934850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.402 [2024-12-09 06:05:49.939835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.402 [2024-12-09 06:05:49.939941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.402 [2024-12-09 06:05:49.939962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.402 [2024-12-09 06:05:49.944835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.402 [2024-12-09 06:05:49.944920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.402 [2024-12-09 06:05:49.944943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.402 [2024-12-09 06:05:49.950043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.402 [2024-12-09 06:05:49.950144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.402 [2024-12-09 06:05:49.950166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.402 [2024-12-09 06:05:49.954986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.402 [2024-12-09 06:05:49.955061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.402 [2024-12-09 06:05:49.955084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.402 [2024-12-09 06:05:49.959926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.402 [2024-12-09 06:05:49.960009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.402 [2024-12-09 06:05:49.960032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.402 [2024-12-09 06:05:49.964822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.402 [2024-12-09 06:05:49.964912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.402 [2024-12-09 06:05:49.964935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.402 [2024-12-09 06:05:49.969760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.402 [2024-12-09 06:05:49.969844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.402 [2024-12-09 06:05:49.969866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.402 [2024-12-09 06:05:49.974680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.402 [2024-12-09 06:05:49.974772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.402 [2024-12-09 06:05:49.974794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.402 [2024-12-09 06:05:49.979577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.402 [2024-12-09 06:05:49.979678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.402 [2024-12-09 06:05:49.979700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.661 [2024-12-09 06:05:49.984839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.661 [2024-12-09 06:05:49.984945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.661 [2024-12-09 06:05:49.984967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.661 [2024-12-09 06:05:49.989787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.661 [2024-12-09 06:05:49.989896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.661 [2024-12-09 06:05:49.989918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.661 [2024-12-09 06:05:49.994758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.661 [2024-12-09 06:05:49.994840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.661 [2024-12-09 06:05:49.994863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.661 [2024-12-09 06:05:49.999677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.661 [2024-12-09 06:05:49.999770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.661 [2024-12-09 06:05:49.999792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.661 [2024-12-09 06:05:50.004830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.661 [2024-12-09 06:05:50.004911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.661 [2024-12-09 06:05:50.004935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.661 [2024-12-09 06:05:50.009718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.661 [2024-12-09 06:05:50.009807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.661 [2024-12-09 06:05:50.009830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.661 [2024-12-09 06:05:50.014718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.661 [2024-12-09 06:05:50.014797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.661 [2024-12-09 06:05:50.014819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.661 [2024-12-09 06:05:50.019820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.661 [2024-12-09 06:05:50.019933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.661 [2024-12-09 06:05:50.019963] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.661 [2024-12-09 06:05:50.024701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.662 [2024-12-09 06:05:50.024795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.662 [2024-12-09 06:05:50.024820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.662 [2024-12-09 06:05:50.029597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.662 [2024-12-09 06:05:50.029704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.662 [2024-12-09 06:05:50.029729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.662 [2024-12-09 06:05:50.034612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.662 [2024-12-09 06:05:50.034751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.662 [2024-12-09 06:05:50.034793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.662 [2024-12-09 06:05:50.039683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.662 [2024-12-09 06:05:50.039786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.662 [2024-12-09 06:05:50.039808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.662 [2024-12-09 06:05:50.044773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.662 [2024-12-09 06:05:50.044895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.662 [2024-12-09 06:05:50.044919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.662 [2024-12-09 06:05:50.049962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.662 [2024-12-09 06:05:50.050088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.662 [2024-12-09 06:05:50.050109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.662 [2024-12-09 06:05:50.055222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.662 [2024-12-09 06:05:50.055345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.662 [2024-12-09 06:05:50.055394] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.662 [2024-12-09 06:05:50.060596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.662 [2024-12-09 06:05:50.060759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.662 [2024-12-09 06:05:50.060792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.662 [2024-12-09 06:05:50.065633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.662 [2024-12-09 06:05:50.065827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.662 [2024-12-09 06:05:50.065848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.662 [2024-12-09 06:05:50.070590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.662 [2024-12-09 06:05:50.070747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.662 [2024-12-09 06:05:50.070769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.662 [2024-12-09 06:05:50.075554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.662 [2024-12-09 06:05:50.075664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.662 [2024-12-09 06:05:50.075686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.662 [2024-12-09 06:05:50.080498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.662 [2024-12-09 06:05:50.080617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.662 [2024-12-09 06:05:50.080638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.662 [2024-12-09 06:05:50.085380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.662 [2024-12-09 06:05:50.085495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.662 [2024-12-09 06:05:50.085516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.662 [2024-12-09 06:05:50.090092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.662 [2024-12-09 06:05:50.090204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.662 [2024-12-09 
06:05:50.090224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.662 [2024-12-09 06:05:50.094873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.662 [2024-12-09 06:05:50.094959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.662 [2024-12-09 06:05:50.094982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.662 [2024-12-09 06:05:50.099494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.662 [2024-12-09 06:05:50.099619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.662 [2024-12-09 06:05:50.099639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.662 [2024-12-09 06:05:50.104162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.662 [2024-12-09 06:05:50.104267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.662 [2024-12-09 06:05:50.104287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.662 [2024-12-09 06:05:50.108722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.662 [2024-12-09 06:05:50.108841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.662 [2024-12-09 06:05:50.108861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.662 [2024-12-09 06:05:50.113348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.662 [2024-12-09 06:05:50.113459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.662 [2024-12-09 06:05:50.113478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.662 [2024-12-09 06:05:50.118287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.662 [2024-12-09 06:05:50.118397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.662 [2024-12-09 06:05:50.118417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.662 [2024-12-09 06:05:50.123257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.662 [2024-12-09 06:05:50.123413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
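The repeated tcp.c:2241 data_crc32_calc_done errors in this stretch of the log are CRC32C data-digest mismatches on the NVMe/TCP connection; each one is completed back to the initiator as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is exactly the counter this digest_error stage asserts on once bdevperf finishes. A minimal sketch of that assertion, built from the rpc.py invocation and jq filter that appear verbatim in the trace further below (the wrapper function here is illustrative, not the literal host/digest.sh source):

    get_transient_errcount() {
        local bdev=$1
        # Ask the bdevperf RPC server (bperf.sock) for per-bdev I/O statistics and
        # pull out the NVMe "command transient transport error" counter.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    }

    # The stage passes only if at least one transient transport error was recorded,
    # e.g. the "(( 401 > 0 ))" check seen later in this log.
    (( $(get_transient_errcount nvme0n1) > 0 ))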
01:06:55.662 [2024-12-09 06:05:50.123434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.662 [2024-12-09 06:05:50.128673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.662 [2024-12-09 06:05:50.128806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.662 [2024-12-09 06:05:50.128826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.662 [2024-12-09 06:05:50.133573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.662 [2024-12-09 06:05:50.133702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.662 [2024-12-09 06:05:50.133737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.662 [2024-12-09 06:05:50.138467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.662 [2024-12-09 06:05:50.138583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.662 [2024-12-09 06:05:50.138604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.662 [2024-12-09 06:05:50.143373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.662 [2024-12-09 06:05:50.143506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.662 [2024-12-09 06:05:50.143529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.662 [2024-12-09 06:05:50.148267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.662 [2024-12-09 06:05:50.148393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.662 [2024-12-09 06:05:50.148414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.662 [2024-12-09 06:05:50.153036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.662 [2024-12-09 06:05:50.153138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.662 [2024-12-09 06:05:50.153158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.662 [2024-12-09 06:05:50.157756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.663 [2024-12-09 06:05:50.157880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 01:06:55.663 [2024-12-09 06:05:50.157900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.663 [2024-12-09 06:05:50.162383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.663 [2024-12-09 06:05:50.162496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.663 [2024-12-09 06:05:50.162516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.663 [2024-12-09 06:05:50.167145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.663 [2024-12-09 06:05:50.167249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.663 [2024-12-09 06:05:50.167269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.663 [2024-12-09 06:05:50.172083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.663 [2024-12-09 06:05:50.172191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.663 [2024-12-09 06:05:50.172211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.663 [2024-12-09 06:05:50.177019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.663 [2024-12-09 06:05:50.177118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.663 [2024-12-09 06:05:50.177138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.663 [2024-12-09 06:05:50.181545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.663 [2024-12-09 06:05:50.181654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.663 [2024-12-09 06:05:50.181686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.663 [2024-12-09 06:05:50.186076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.663 [2024-12-09 06:05:50.186180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.663 [2024-12-09 06:05:50.186200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.663 [2024-12-09 06:05:50.190511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.663 [2024-12-09 06:05:50.190623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.663 [2024-12-09 06:05:50.190643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.663 [2024-12-09 06:05:50.195226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.663 [2024-12-09 06:05:50.195338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.663 [2024-12-09 06:05:50.195359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.663 [2024-12-09 06:05:50.199828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.663 [2024-12-09 06:05:50.199933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.663 [2024-12-09 06:05:50.199954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.663 [2024-12-09 06:05:50.204596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.663 [2024-12-09 06:05:50.204753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.663 [2024-12-09 06:05:50.204773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.663 [2024-12-09 06:05:50.209515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.663 [2024-12-09 06:05:50.209630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.663 [2024-12-09 06:05:50.209650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.663 [2024-12-09 06:05:50.214315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.663 [2024-12-09 06:05:50.214426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.663 [2024-12-09 06:05:50.214447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.663 [2024-12-09 06:05:50.219315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.663 [2024-12-09 06:05:50.219425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.663 [2024-12-09 06:05:50.219446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.663 [2024-12-09 06:05:50.224356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.663 [2024-12-09 06:05:50.224486] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.663 [2024-12-09 06:05:50.224522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.663 [2024-12-09 06:05:50.229633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.663 [2024-12-09 06:05:50.229762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.663 [2024-12-09 06:05:50.229784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.663 [2024-12-09 06:05:50.234969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.663 [2024-12-09 06:05:50.235094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.663 [2024-12-09 06:05:50.235116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.663 [2024-12-09 06:05:50.240284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.663 [2024-12-09 06:05:50.240415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.663 [2024-12-09 06:05:50.240436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.921 [2024-12-09 06:05:50.245919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.921 [2024-12-09 06:05:50.245997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.921 [2024-12-09 06:05:50.246019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.921 [2024-12-09 06:05:50.251191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.921 [2024-12-09 06:05:50.251287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.921 [2024-12-09 06:05:50.251308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.921 [2024-12-09 06:05:50.256250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.921 [2024-12-09 06:05:50.256345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.921 [2024-12-09 06:05:50.256365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.921 [2024-12-09 06:05:50.261387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.921 [2024-12-09 06:05:50.261471] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.921 [2024-12-09 06:05:50.261492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.921 [2024-12-09 06:05:50.266477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.921 [2024-12-09 06:05:50.266585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.921 [2024-12-09 06:05:50.266606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.921 [2024-12-09 06:05:50.271469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.921 [2024-12-09 06:05:50.271578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.921 [2024-12-09 06:05:50.271598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.921 [2024-12-09 06:05:50.276442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.921 [2024-12-09 06:05:50.276553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.921 [2024-12-09 06:05:50.276573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.921 [2024-12-09 06:05:50.281229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.921 [2024-12-09 06:05:50.281324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.921 [2024-12-09 06:05:50.281345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:06:55.921 [2024-12-09 06:05:50.285993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.921 [2024-12-09 06:05:50.286107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.921 [2024-12-09 06:05:50.286160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:06:55.921 [2024-12-09 06:05:50.290874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with pdu=0x200016eff3c8 01:06:55.921 [2024-12-09 06:05:50.290969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.922 [2024-12-09 06:05:50.291007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:06:55.922 6193.50 IOPS, 774.19 MiB/s [2024-12-09T06:05:50.508Z] [2024-12-09 06:05:50.296710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x21918b0) with 
pdu=0x200016eff3c8 01:06:55.922 [2024-12-09 06:05:50.296835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:06:55.922 [2024-12-09 06:05:50.296856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:06:55.922 01:06:55.922 Latency(us) 01:06:55.922 [2024-12-09T06:05:50.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:55.922 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 01:06:55.922 nvme0n1 : 2.00 6191.77 773.97 0.00 0.00 2577.66 1549.03 6404.65 01:06:55.922 [2024-12-09T06:05:50.508Z] =================================================================================================================== 01:06:55.922 [2024-12-09T06:05:50.508Z] Total : 6191.77 773.97 0.00 0.00 2577.66 1549.03 6404.65 01:06:55.922 { 01:06:55.922 "results": [ 01:06:55.922 { 01:06:55.922 "job": "nvme0n1", 01:06:55.922 "core_mask": "0x2", 01:06:55.922 "workload": "randwrite", 01:06:55.922 "status": "finished", 01:06:55.922 "queue_depth": 16, 01:06:55.922 "io_size": 131072, 01:06:55.922 "runtime": 2.004272, 01:06:55.922 "iops": 6191.774369945796, 01:06:55.922 "mibps": 773.9717962432245, 01:06:55.922 "io_failed": 0, 01:06:55.922 "io_timeout": 0, 01:06:55.922 "avg_latency_us": 2577.6604181378652, 01:06:55.922 "min_latency_us": 1549.0327272727272, 01:06:55.922 "max_latency_us": 6404.654545454546 01:06:55.922 } 01:06:55.922 ], 01:06:55.922 "core_count": 1 01:06:55.922 } 01:06:55.922 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 01:06:55.922 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 01:06:55.922 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 01:06:55.922 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 01:06:55.922 | .driver_specific 01:06:55.922 | .nvme_error 01:06:55.922 | .status_code 01:06:55.922 | .command_transient_transport_error' 01:06:56.180 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 401 > 0 )) 01:06:56.180 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93811 01:06:56.180 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 93811 ']' 01:06:56.180 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 93811 01:06:56.180 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 01:06:56.180 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:06:56.180 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93811 01:06:56.180 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:06:56.180 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:06:56.180 killing process with pid 93811 01:06:56.180 06:05:50 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93811' 01:06:56.180 Received shutdown signal, test time was about 2.000000 seconds 01:06:56.180 01:06:56.180 Latency(us) 01:06:56.180 [2024-12-09T06:05:50.766Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:06:56.180 [2024-12-09T06:05:50.766Z] =================================================================================================================== 01:06:56.180 [2024-12-09T06:05:50.766Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:06:56.180 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 93811 01:06:56.180 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 93811 01:06:56.180 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 93555 01:06:56.180 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 93555 ']' 01:06:56.180 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 93555 01:06:56.180 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 01:06:56.180 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:06:56.438 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93555 01:06:56.438 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:06:56.438 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:06:56.438 killing process with pid 93555 01:06:56.438 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93555' 01:06:56.438 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 93555 01:06:56.438 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 93555 01:06:56.438 01:06:56.438 real 0m15.290s 01:06:56.438 user 0m30.115s 01:06:56.438 sys 0m4.040s 01:06:56.438 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 01:06:56.438 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 01:06:56.438 ************************************ 01:06:56.438 END TEST nvmf_digest_error 01:06:56.438 ************************************ 01:06:56.438 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 01:06:56.438 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 01:06:56.438 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 01:06:56.438 06:05:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 01:06:56.696 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:06:56.696 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 01:06:56.696 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 01:06:56.696 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:06:56.696 rmmod nvme_tcp 
01:06:56.696 rmmod nvme_fabrics 01:06:56.696 rmmod nvme_keyring 01:06:56.696 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:06:56.696 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 01:06:56.696 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 01:06:56.696 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 93555 ']' 01:06:56.696 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 93555 01:06:56.696 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 93555 ']' 01:06:56.696 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 93555 01:06:56.696 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (93555) - No such process 01:06:56.696 Process with pid 93555 is not found 01:06:56.696 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 93555 is not found' 01:06:56.696 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:06:56.696 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:06:56.696 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:06:56.696 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 01:06:56.696 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 01:06:56.696 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:06:56.697 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 01:06:56.697 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:06:56.697 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:06:56.697 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:06:56.697 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:06:56.697 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:06:56.697 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:06:56.697 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:06:56.697 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:06:56.697 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:06:56.697 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:06:56.697 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:06:56.697 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:06:56.697 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:06:56.697 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:06:56.697 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:06:56.954 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 
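The nvmftestfini / nvmf_tcp_fini sequence traced above strips the SPDK_NVMF iptables rules and dismantles the veth-plus-bridge test topology before handing off to remove_spdk_ns (whose internal commands are suppressed by the xtrace redirect seen just below). Collapsed into a plain script, the traced commands amount to roughly the following sketch; interface and namespace names are the ones in the trace, and the final namespace deletion is an assumption since remove_spdk_ns itself is not traced:

    # Restore iptables without any SPDK_NVMF rules.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Detach the veth/bridge ports from their master, then bring them down.
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
    done
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" down
    done

    # Delete the bridge, the host-side interfaces, and the target-side
    # interfaces living inside the nvmf_tgt_ns_spdk namespace.
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2

    # remove_spdk_ns then drops the namespace itself (presumed effect; its body
    # is not visible in this log).
    ip netns delete nvmf_tgt_ns_spdk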
01:06:56.954 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:06:56.954 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:06:56.954 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:06:56.954 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 01:06:56.954 01:06:56.954 real 0m31.682s 01:06:56.954 user 1m0.545s 01:06:56.954 sys 0m8.420s 01:06:56.954 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 01:06:56.954 ************************************ 01:06:56.954 06:05:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 01:06:56.954 END TEST nvmf_digest 01:06:56.954 ************************************ 01:06:56.954 06:05:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 1 -eq 1 ]] 01:06:56.954 06:05:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ tcp == \t\c\p ]] 01:06:56.954 06:05:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 01:06:56.955 06:05:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:06:56.955 06:05:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:06:56.955 06:05:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:06:56.955 ************************************ 01:06:56.955 START TEST nvmf_mdns_discovery 01:06:56.955 ************************************ 01:06:56.955 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 01:06:56.955 * Looking for test storage... 
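Before mdns_discovery.sh does any mDNS-specific work, the shared prologue locates the test storage (the "Found test storage" line that follows) and then checks the installed lcov version with a small version-comparison helper from scripts/common.sh, traced just below. Reconstructed loosely from those traced fragments, the comparison behaves roughly like this sketch; the real helper also normalizes non-numeric components through its decimal() routine and tracks lt/gt/eq flags, which are omitted here:

    # lt X Y  ->  true when version X is older than version Y (sketch only).
    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS='.-:'            # split version strings on '.', '-' and ':'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v d1 d2

        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            if ((d1 > d2)); then [[ $op == '>' || $op == '>=' ]]; return; fi
            if ((d1 < d2)); then [[ $op == '<' || $op == '<=' ]]; return; fi
        done
        # All compared components are equal.
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }

    # In this run the installed lcov reports 1.15, so "lt 1.15 2" succeeds and the
    # lcov-1.x style "--rc lcov_branch_coverage=1" options are exported, as the
    # LCOV_OPTS trace below shows.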
01:06:56.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:06:56.955 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:06:56.955 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1711 -- # lcov --version 01:06:56.955 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:06:57.212 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:06:57.212 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:06:57.212 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 01:06:57.212 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 01:06:57.212 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # IFS=.-: 01:06:57.212 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # read -ra ver1 01:06:57.212 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # IFS=.-: 01:06:57.212 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # read -ra ver2 01:06:57.212 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@338 -- # local 'op=<' 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@340 -- # ver1_l=2 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@341 -- # ver2_l=1 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@344 -- # case "$op" in 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@345 -- # : 1 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # decimal 1 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=1 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 1 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # ver1[v]=1 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # decimal 2 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=2 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 2 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # ver2[v]=2 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # return 0 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:06:57.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:06:57.213 --rc genhtml_branch_coverage=1 01:06:57.213 --rc genhtml_function_coverage=1 01:06:57.213 --rc genhtml_legend=1 01:06:57.213 --rc geninfo_all_blocks=1 01:06:57.213 --rc geninfo_unexecuted_blocks=1 01:06:57.213 01:06:57.213 ' 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:06:57.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:06:57.213 --rc genhtml_branch_coverage=1 01:06:57.213 --rc genhtml_function_coverage=1 01:06:57.213 --rc genhtml_legend=1 01:06:57.213 --rc geninfo_all_blocks=1 01:06:57.213 --rc geninfo_unexecuted_blocks=1 01:06:57.213 01:06:57.213 ' 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:06:57.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:06:57.213 --rc genhtml_branch_coverage=1 01:06:57.213 --rc genhtml_function_coverage=1 01:06:57.213 --rc genhtml_legend=1 01:06:57.213 --rc geninfo_all_blocks=1 01:06:57.213 --rc geninfo_unexecuted_blocks=1 01:06:57.213 01:06:57.213 ' 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:06:57.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:06:57.213 --rc genhtml_branch_coverage=1 01:06:57.213 --rc genhtml_function_coverage=1 01:06:57.213 --rc genhtml_legend=1 01:06:57.213 --rc geninfo_all_blocks=1 01:06:57.213 --rc geninfo_unexecuted_blocks=1 01:06:57.213 01:06:57.213 ' 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@15 -- # shopt -s extglob 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # : 0 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:06:57.213 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 01:06:57.213 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:06:57.214 Cannot find device "nvmf_init_br" 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:06:57.214 Cannot find device "nvmf_init_br2" 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:06:57.214 Cannot find device "nvmf_tgt_br" 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # true 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:06:57.214 Cannot find device "nvmf_tgt_br2" 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # true 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:06:57.214 Cannot find device "nvmf_init_br" 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # true 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:06:57.214 Cannot find device "nvmf_init_br2" 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # true 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:06:57.214 Cannot find device "nvmf_tgt_br" 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # true 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:06:57.214 Cannot find device "nvmf_tgt_br2" 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # true 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:06:57.214 Cannot find device "nvmf_br" 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # true 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:06:57.214 Cannot find device "nvmf_init_if" 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # true 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:06:57.214 Cannot find device "nvmf_init_if2" 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # true 01:06:57.214 06:05:51 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:06:57.214 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # true 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:06:57.214 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # true 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:06:57.214 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:06:57.472 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:06:57.472 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:06:57.472 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:06:57.472 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:06:57.472 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:06:57.472 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:06:57.472 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:06:57.472 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:06:57.472 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:06:57.472 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:06:57.472 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:06:57.472 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:06:57.472 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:06:57.472 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
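The block above is nvmf_veth_init rebuilding the test topology from a clean slate: a target namespace, two veth pairs per side, and addresses in 10.0.0.0/24 (initiators on .1/.2, target interfaces on .3/.4 inside the namespace). Condensed into a standalone sketch using the exact names and addresses from the trace; enslaving the *_br peers to nvmf_br and the SPDK-tagged iptables ACCEPT rules follow immediately in the log:

  # Condensed sketch of nvmf_veth_init as executed above (names/addresses taken from the trace).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator side
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target side
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  # Next in the trace: ip link set nvmf_*_br master nvmf_br, then iptables ACCEPT rules tagged SPDK_NVMF.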
01:06:57.472 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:06:57.472 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:06:57.472 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:06:57.472 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:06:57.472 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:06:57.473 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:06:57.473 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:06:57.473 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:06:57.473 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:06:57.473 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:06:57.473 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:06:57.473 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:06:57.473 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 01:06:57.473 01:06:57.473 --- 10.0.0.3 ping statistics --- 01:06:57.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:57.473 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 01:06:57.473 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:06:57.473 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:06:57.473 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 01:06:57.473 01:06:57.473 --- 10.0.0.4 ping statistics --- 01:06:57.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:57.473 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 01:06:57.473 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:06:57.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:06:57.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 01:06:57.473 01:06:57.473 --- 10.0.0.1 ping statistics --- 01:06:57.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:57.473 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 01:06:57.473 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:06:57.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:06:57.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 01:06:57.473 01:06:57.473 --- 10.0.0.2 ping statistics --- 01:06:57.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:06:57.473 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 01:06:57.473 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:06:57.473 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@461 -- # return 0 01:06:57.473 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:06:57.473 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:06:57.473 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:06:57.473 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:06:57.473 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:06:57.473 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:06:57.473 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:06:57.473 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 01:06:57.473 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:06:57.473 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 01:06:57.473 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:06:57.473 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@509 -- # nvmfpid=94148 01:06:57.473 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 01:06:57.473 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@510 -- # waitforlisten 94148 01:06:57.473 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 94148 ']' 01:06:57.473 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:06:57.473 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 01:06:57.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:06:57.473 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:06:57.473 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 01:06:57.473 06:05:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:06:57.730 [2024-12-09 06:05:52.061337] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:06:57.730 [2024-12-09 06:05:52.061440] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:06:57.731 [2024-12-09 06:05:52.218823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:57.731 [2024-12-09 06:05:52.257110] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:06:57.731 [2024-12-09 06:05:52.257186] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:06:57.731 [2024-12-09 06:05:52.257200] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:06:57.731 [2024-12-09 06:05:52.257210] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:06:57.731 [2024-12-09 06:05:52.257218] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:06:57.731 [2024-12-09 06:05:52.257585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:06:57.988 [2024-12-09 06:05:52.456769] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:06:57.988 [2024-12-09 06:05:52.468924] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:06:57.988 null0 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:06:57.988 null1 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:06:57.988 null2 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:06:57.988 null3 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=94190 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 94190 /tmp/host.sock 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 94190 ']' 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local 
rpc_addr=/tmp/host.sock 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 01:06:57.988 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 01:06:57.988 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:06:58.246 [2024-12-09 06:05:52.585847] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:06:58.246 [2024-12-09 06:05:52.585946] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94190 ] 01:06:58.246 [2024-12-09 06:05:52.736457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:06:58.246 [2024-12-09 06:05:52.771824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:06:58.503 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:06:58.503 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0 01:06:58.504 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 01:06:58.504 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 01:06:58.504 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 01:06:58.504 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=94201 01:06:58.504 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 01:06:58.504 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 01:06:58.504 06:05:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 01:06:58.504 Process 1067 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 01:06:58.504 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 01:06:58.504 Successfully dropped root privileges. 01:06:58.504 avahi-daemon 0.8 starting up. 01:06:58.504 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 01:06:58.504 Successfully called chroot(). 01:06:58.504 Successfully dropped remaining capabilities. 01:06:59.439 No service file found in /etc/avahi/services. 01:06:59.439 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 01:06:59.439 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 01:06:59.439 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 01:06:59.439 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 01:06:59.439 Network interface enumeration completed. 01:06:59.439 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 
01:06:59.439 Registering new address record for 10.0.0.4 on nvmf_tgt_if2.IPv4. 01:06:59.439 Registering new address record for fe80::c451:9cff:fe46:6c53 on nvmf_tgt_if.*. 01:06:59.439 Registering new address record for 10.0.0.3 on nvmf_tgt_if.IPv4. 01:06:59.439 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 1009041656. 01:06:59.439 06:05:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 01:06:59.439 06:05:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:59.439 06:05:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:06:59.439 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:59.439 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 01:06:59.439 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:59.439 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:06:59.439 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:59.439 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@114 -- # notify_id=0 01:06:59.439 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # get_subsystem_names 01:06:59.439 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:06:59.440 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:59.440 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:06:59.440 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 01:06:59.440 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 01:06:59.440 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # [[ '' == '' ]] 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # get_bdev_list 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # [[ '' == '' ]] 01:06:59.699 06:05:54 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@123 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # get_subsystem_names 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # [[ '' == '' ]] 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # get_bdev_list 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # [[ '' == '' ]] 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_subsystem_names 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 01:06:59.699 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ '' == '' ]] 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_bdev_list 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:06:59.959 [2024-12-09 06:05:54.315545] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ '' == '' ]] 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:06:59.959 [2024-12-09 06:05:54.373550] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@140 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@10 -- # set +x 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@145 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_publish_mdns_prr 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:06:59.959 06:05:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 5 01:07:00.896 [2024-12-09 06:05:55.215535] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 01:07:01.155 [2024-12-09 06:05:55.615565] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 01:07:01.155 [2024-12-09 06:05:55.615606] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 01:07:01.155 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:07:01.155 cookie is 0 01:07:01.155 is_local: 1 01:07:01.155 our_own: 0 01:07:01.155 wide_area: 0 01:07:01.155 multicast: 1 01:07:01.155 cached: 1 01:07:01.155 [2024-12-09 06:05:55.715535] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 01:07:01.155 [2024-12-09 06:05:55.715565] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 01:07:01.155 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:07:01.155 cookie is 0 01:07:01.155 is_local: 1 01:07:01.155 our_own: 0 01:07:01.155 wide_area: 0 01:07:01.155 multicast: 1 01:07:01.155 cached: 1 01:07:02.088 [2024-12-09 06:05:56.616457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:02.088 [2024-12-09 06:05:56.616528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15f8850 with addr=10.0.0.4, port=8009 01:07:02.088 [2024-12-09 06:05:56.616565] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 01:07:02.088 [2024-12-09 06:05:56.616581] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:07:02.088 [2024-12-09 06:05:56.616590] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 01:07:02.346 [2024-12-09 06:05:56.724214] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 01:07:02.346 [2024-12-09 06:05:56.724257] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 01:07:02.346 [2024-12-09 06:05:56.724276] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:07:02.346 [2024-12-09 06:05:56.810345] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem mdns1_nvme0 01:07:02.346 [2024-12-09 06:05:56.864783] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 01:07:02.346 [2024-12-09 06:05:56.865550] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x162d7d0:1 started. 01:07:02.346 [2024-12-09 06:05:56.867314] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 01:07:02.346 [2024-12-09 06:05:56.867340] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 01:07:02.346 [2024-12-09 06:05:56.872584] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x162d7d0 was disconnected and freed. delete nvme_qpair. 01:07:03.280 [2024-12-09 06:05:57.616398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:03.281 [2024-12-09 06:05:57.616466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16163a0 with addr=10.0.0.4, port=8009 01:07:03.281 [2024-12-09 06:05:57.616503] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 01:07:03.281 [2024-12-09 06:05:57.616513] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:07:03.281 [2024-12-09 06:05:57.616523] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 01:07:04.215 [2024-12-09 06:05:58.616369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:04.215 [2024-12-09 06:05:58.616438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1616580 with addr=10.0.0.4, port=8009 01:07:04.215 [2024-12-09 06:05:58.616458] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 01:07:04.215 [2024-12-09 06:05:58.616467] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:07:04.215 [2024-12-09 06:05:58.616476] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 01:07:05.149 06:05:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 'not found' 01:07:05.149 06:05:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 01:07:05.149 06:05:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 01:07:05.149 06:05:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 01:07:05.149 06:05:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 01:07:05.149 06:05:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 01:07:05.149 06:05:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 01:07:05.149 06:05:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 01:07:05.149 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 01:07:05.149 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:07:05.149 
=;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 01:07:05.149 06:05:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 01:07:05.149 06:05:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:07:05.149 06:05:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 01:07:05.149 06:05:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:07:05.149 06:05:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 01:07:05.149 06:05:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:07:05.149 06:05:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 01:07:05.149 06:05:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:07:05.149 06:05:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 01:07:05.149 06:05:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 01:07:05.149 06:05:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 01:07:05.149 06:05:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.4 -s 8009 01:07:05.149 06:05:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:05.149 06:05:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:05.149 [2024-12-09 06:05:59.460080] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 8009 *** 01:07:05.149 [2024-12-09 06:05:59.462414] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 01:07:05.149 [2024-12-09 06:05:59.462453] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:07:05.149 06:05:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:05.149 06:05:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 01:07:05.149 06:05:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:05.149 06:05:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:05.149 [2024-12-09 06:05:59.467970] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 01:07:05.149 [2024-12-09 06:05:59.468432] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 01:07:05.149 06:05:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:05.149 06:05:59 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@157 -- # sleep 1 01:07:05.149 [2024-12-09 06:05:59.599536] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 01:07:05.149 [2024-12-09 06:05:59.599587] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:07:05.149 [2024-12-09 06:05:59.624811] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 01:07:05.149 [2024-12-09 06:05:59.624836] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 01:07:05.149 [2024-12-09 06:05:59.624869] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 01:07:05.149 [2024-12-09 06:05:59.684959] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 01:07:05.149 [2024-12-09 06:05:59.710930] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 new subsystem mdns0_nvme0 01:07:05.407 [2024-12-09 06:05:59.765357] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr was created to 10.0.0.4:4420 01:07:05.407 [2024-12-09 06:05:59.766005] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x162a8a0:1 started. 01:07:05.407 [2024-12-09 06:05:59.767560] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 01:07:05.408 [2024-12-09 06:05:59.767604] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 01:07:05.408 [2024-12-09 06:05:59.773450] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x162a8a0 was disconnected and freed. delete nvme_qpair. 
01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 found 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 01:07:06.057 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 01:07:06.057 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 01:07:06.057 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 01:07:06.057 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:07:06.057 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:07:06.057 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:07:06.057 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\4* ]] 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # get_mdns_discovery_svcs 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # [[ mdns == \m\d\n\s ]] 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # get_discovery_ctrlrs 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:07:06.057 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 01:07:06.058 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 01:07:06.058 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:06.058 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 01:07:06.058 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:06.058 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:06.058 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 01:07:06.058 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 01:07:06.058 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:07:06.058 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 01:07:06.058 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 01:07:06.058 
06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:06.058 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:06.058 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 01:07:06.316 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:06.316 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 01:07:06.316 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 01:07:06.316 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:07:06.316 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:06.316 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:06.316 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:07:06.316 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:07:06.316 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:07:06.316 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:06.316 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 01:07:06.316 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 01:07:06.316 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4420 == \4\4\2\0 ]] 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@73 -- # sort -n 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4420 == \4\4\2\0 ]] 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=2 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 2 == 2 ]] 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:06.317 [2024-12-09 06:06:00.897070] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x16172d0:1 started. 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@173 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:06.317 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:06.576 [2024-12-09 06:06:00.903755] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x16172d0 was disconnected and freed. delete nvme_qpair. 01:07:06.576 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:06.576 06:06:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # sleep 1 01:07:06.576 [2024-12-09 06:06:00.910417] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x162bf40:1 started. 01:07:06.576 [2024-12-09 06:06:00.913626] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x162bf40 was disconnected and freed. delete nvme_qpair. 
01:07:06.576 [2024-12-09 06:06:00.915572] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 01:07:06.576 [2024-12-09 06:06:00.915595] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 01:07:06.576 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:07:06.576 cookie is 0 01:07:06.576 is_local: 1 01:07:06.576 our_own: 0 01:07:06.576 wide_area: 0 01:07:06.576 multicast: 1 01:07:06.576 cached: 1 01:07:06.576 [2024-12-09 06:06:00.915624] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 01:07:06.576 [2024-12-09 06:06:01.015567] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 01:07:06.576 [2024-12-09 06:06:01.015593] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 01:07:06.576 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:07:06.576 cookie is 0 01:07:06.576 is_local: 1 01:07:06.576 our_own: 0 01:07:06.576 wide_area: 0 01:07:06.576 multicast: 1 01:07:06.576 cached: 1 01:07:06.576 [2024-12-09 06:06:01.015618] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.4 trid->trsvcid: 8009 01:07:07.513 06:06:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 01:07:07.513 06:06:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:07:07.513 06:06:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:07:07.513 06:06:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:07.513 06:06:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:07.513 06:06:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:07:07.513 06:06:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:07:07.514 06:06:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:07.514 06:06:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 01:07:07.514 06:06:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 01:07:07.514 06:06:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 01:07:07.514 06:06:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 01:07:07.514 06:06:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:07.514 06:06:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:07.514 06:06:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:07.514 06:06:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 01:07:07.514 06:06:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 01:07:07.514 06:06:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 2 == 2 ]] 01:07:07.514 06:06:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 01:07:07.514 06:06:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:07.514 06:06:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:07.514 [2024-12-09 06:06:02.025819] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:07:07.514 [2024-12-09 06:06:02.026892] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 01:07:07.514 [2024-12-09 06:06:02.026935] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:07:07.514 [2024-12-09 06:06:02.026975] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 01:07:07.514 [2024-12-09 06:06:02.026990] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 01:07:07.514 06:06:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:07.514 06:06:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4421 01:07:07.514 06:06:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:07.514 06:06:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:07.514 [2024-12-09 06:06:02.033697] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4421 *** 01:07:07.514 [2024-12-09 06:06:02.033882] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 01:07:07.514 [2024-12-09 06:06:02.033933] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 01:07:07.514 06:06:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:07.514 06:06:02 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@184 -- # sleep 1 01:07:07.773 [2024-12-09 06:06:02.164991] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for mdns1_nvme0 01:07:07.773 [2024-12-09 06:06:02.165443] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new path for mdns0_nvme0 01:07:07.773 [2024-12-09 06:06:02.228467] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 01:07:07.773 
[2024-12-09 06:06:02.228557] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 01:07:07.773 [2024-12-09 06:06:02.228569] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 01:07:07.773 [2024-12-09 06:06:02.228575] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 01:07:07.773 [2024-12-09 06:06:02.228593] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:07:07.773 [2024-12-09 06:06:02.228833] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 2] ctrlr was created to 10.0.0.4:4421 01:07:07.773 [2024-12-09 06:06:02.228863] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 01:07:07.773 [2024-12-09 06:06:02.228872] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 01:07:07.773 [2024-12-09 06:06:02.228877] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 01:07:07.773 [2024-12-09 06:06:02.228894] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 01:07:07.773 [2024-12-09 06:06:02.274149] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 01:07:07.773 [2024-12-09 06:06:02.274172] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 01:07:07.773 [2024-12-09 06:06:02.274229] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 01:07:07.773 [2024-12-09 06:06:02.274238] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_subsystem_names 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:07:08.710 06:06:03 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # get_subsystem_paths mdns0_nvme0 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # get_subsystem_paths mdns1_nvme0 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 01:07:08.710 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:08.971 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 01:07:08.971 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # get_notification_count 01:07:08.971 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 01:07:08.971 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 01:07:08.971 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:08.971 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:08.971 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:08.971 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 01:07:08.971 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 01:07:08.971 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ 0 == 0 ]] 01:07:08.971 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:07:08.971 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:08.971 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:08.971 [2024-12-09 06:06:03.371062] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 01:07:08.971 [2024-12-09 06:06:03.371101] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:07:08.971 [2024-12-09 06:06:03.371138] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 01:07:08.971 [2024-12-09 06:06:03.371151] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 01:07:08.971 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:08.971 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@196 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 01:07:08.971 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:08.971 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:08.971 [2024-12-09 06:06:03.376058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:07:08.971 [2024-12-09 06:06:03.376115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:07:08.971 [2024-12-09 06:06:03.376161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:07:08.971 [2024-12-09 06:06:03.376171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:07:08.971 [2024-12-09 06:06:03.376182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:07:08.971 [2024-12-09 06:06:03.376191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:07:08.971 [2024-12-09 06:06:03.376201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:07:08.971 [2024-12-09 06:06:03.376210] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:07:08.971 [2024-12-09 06:06:03.376219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2780 is same with the state(6) to be set 01:07:08.971 [2024-12-09 06:06:03.379045] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 01:07:08.971 [2024-12-09 06:06:03.379106] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 01:07:08.971 [2024-12-09 06:06:03.380802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:07:08.971 [2024-12-09 06:06:03.380838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:07:08.971 [2024-12-09 06:06:03.380851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:07:08.971 [2024-12-09 06:06:03.380861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:07:08.971 [2024-12-09 06:06:03.380871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:07:08.971 [2024-12-09 06:06:03.380880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:07:08.971 [2024-12-09 06:06:03.380892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:07:08.971 [2024-12-09 06:06:03.380902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:07:08.971 [2024-12-09 06:06:03.380911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1616900 is same with the state(6) to be set 01:07:08.971 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:08.971 06:06:03 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # sleep 1 01:07:08.971 [2024-12-09 06:06:03.386014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a2780 (9): Bad file descriptor 01:07:08.971 [2024-12-09 06:06:03.390766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1616900 (9): Bad file descriptor 01:07:08.971 [2024-12-09 06:06:03.396028] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:07:08.971 [2024-12-09 06:06:03.396066] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:07:08.971 [2024-12-09 06:06:03.396089] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:07:08.971 [2024-12-09 06:06:03.396095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:07:08.971 [2024-12-09 06:06:03.396141] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
01:07:08.971 [2024-12-09 06:06:03.396222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:08.971 [2024-12-09 06:06:03.396244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a2780 with addr=10.0.0.3, port=4420 01:07:08.971 [2024-12-09 06:06:03.396255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2780 is same with the state(6) to be set 01:07:08.971 [2024-12-09 06:06:03.396290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a2780 (9): Bad file descriptor 01:07:08.971 [2024-12-09 06:06:03.396307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:07:08.971 [2024-12-09 06:06:03.396316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:07:08.971 [2024-12-09 06:06:03.396342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:07:08.971 [2024-12-09 06:06:03.396351] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:07:08.971 [2024-12-09 06:06:03.396358] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:07:08.972 [2024-12-09 06:06:03.396363] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:07:08.972 [2024-12-09 06:06:03.400788] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 01:07:08.972 [2024-12-09 06:06:03.400826] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 01:07:08.972 [2024-12-09 06:06:03.400849] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 01:07:08.972 [2024-12-09 06:06:03.400855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 01:07:08.972 [2024-12-09 06:06:03.400897] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 01:07:08.972 [2024-12-09 06:06:03.400955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:08.972 [2024-12-09 06:06:03.400976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1616900 with addr=10.0.0.4, port=4420 01:07:08.972 [2024-12-09 06:06:03.400987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1616900 is same with the state(6) to be set 01:07:08.972 [2024-12-09 06:06:03.401004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1616900 (9): Bad file descriptor 01:07:08.972 [2024-12-09 06:06:03.401018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 01:07:08.972 [2024-12-09 06:06:03.401027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 01:07:08.972 [2024-12-09 06:06:03.401036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 01:07:08.972 [2024-12-09 06:06:03.401045] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
01:07:08.972 [2024-12-09 06:06:03.401051] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 01:07:08.972 [2024-12-09 06:06:03.401056] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 01:07:08.972 [2024-12-09 06:06:03.406150] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:07:08.972 [2024-12-09 06:06:03.406189] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:07:08.972 [2024-12-09 06:06:03.406211] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:07:08.972 [2024-12-09 06:06:03.406217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:07:08.972 [2024-12-09 06:06:03.406245] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:07:08.972 [2024-12-09 06:06:03.406314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:08.972 [2024-12-09 06:06:03.406335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a2780 with addr=10.0.0.3, port=4420 01:07:08.972 [2024-12-09 06:06:03.406347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2780 is same with the state(6) to be set 01:07:08.972 [2024-12-09 06:06:03.406362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a2780 (9): Bad file descriptor 01:07:08.972 [2024-12-09 06:06:03.406377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:07:08.972 [2024-12-09 06:06:03.406386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:07:08.972 [2024-12-09 06:06:03.406395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:07:08.972 [2024-12-09 06:06:03.406403] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:07:08.972 [2024-12-09 06:06:03.406409] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:07:08.972 [2024-12-09 06:06:03.406414] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:07:08.972 [2024-12-09 06:06:03.410906] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 01:07:08.972 [2024-12-09 06:06:03.410931] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 01:07:08.972 [2024-12-09 06:06:03.410938] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 01:07:08.972 [2024-12-09 06:06:03.410944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 01:07:08.972 [2024-12-09 06:06:03.410971] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
01:07:08.972 [2024-12-09 06:06:03.411024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:08.972 [2024-12-09 06:06:03.411045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1616900 with addr=10.0.0.4, port=4420 01:07:08.972 [2024-12-09 06:06:03.411056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1616900 is same with the state(6) to be set 01:07:08.972 [2024-12-09 06:06:03.411072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1616900 (9): Bad file descriptor 01:07:08.972 [2024-12-09 06:06:03.411086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 01:07:08.972 [2024-12-09 06:06:03.411095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 01:07:08.972 [2024-12-09 06:06:03.411105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 01:07:08.972 [2024-12-09 06:06:03.411113] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 01:07:08.972 [2024-12-09 06:06:03.411119] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 01:07:08.972 [2024-12-09 06:06:03.411124] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 01:07:08.972 [2024-12-09 06:06:03.416252] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:07:08.972 [2024-12-09 06:06:03.416302] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:07:08.972 [2024-12-09 06:06:03.416308] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:07:08.972 [2024-12-09 06:06:03.416313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:07:08.972 [2024-12-09 06:06:03.416355] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:07:08.972 [2024-12-09 06:06:03.416422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:08.972 [2024-12-09 06:06:03.416443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a2780 with addr=10.0.0.3, port=4420 01:07:08.972 [2024-12-09 06:06:03.416455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2780 is same with the state(6) to be set 01:07:08.972 [2024-12-09 06:06:03.416471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a2780 (9): Bad file descriptor 01:07:08.972 [2024-12-09 06:06:03.416485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:07:08.972 [2024-12-09 06:06:03.416493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:07:08.972 [2024-12-09 06:06:03.416502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:07:08.972 [2024-12-09 06:06:03.416511] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
01:07:08.972 [2024-12-09 06:06:03.416516] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:07:08.972 [2024-12-09 06:06:03.416521] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:07:08.972 [2024-12-09 06:06:03.420995] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 01:07:08.972 [2024-12-09 06:06:03.421021] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 01:07:08.972 [2024-12-09 06:06:03.421027] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 01:07:08.972 [2024-12-09 06:06:03.421033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 01:07:08.972 [2024-12-09 06:06:03.421059] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 01:07:08.972 [2024-12-09 06:06:03.421111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:08.972 [2024-12-09 06:06:03.421132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1616900 with addr=10.0.0.4, port=4420 01:07:08.972 [2024-12-09 06:06:03.421143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1616900 is same with the state(6) to be set 01:07:08.972 [2024-12-09 06:06:03.421159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1616900 (9): Bad file descriptor 01:07:08.972 [2024-12-09 06:06:03.421184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 01:07:08.972 [2024-12-09 06:06:03.421195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 01:07:08.972 [2024-12-09 06:06:03.421204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 01:07:08.972 [2024-12-09 06:06:03.421212] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 01:07:08.972 [2024-12-09 06:06:03.421218] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 01:07:08.972 [2024-12-09 06:06:03.421223] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 01:07:08.972 [2024-12-09 06:06:03.426364] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:07:08.972 [2024-12-09 06:06:03.426408] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:07:08.972 [2024-12-09 06:06:03.426431] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:07:08.972 [2024-12-09 06:06:03.426436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:07:08.972 [2024-12-09 06:06:03.426480] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
01:07:08.972 [2024-12-09 06:06:03.426535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:08.972 [2024-12-09 06:06:03.426559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a2780 with addr=10.0.0.3, port=4420 01:07:08.972 [2024-12-09 06:06:03.426570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2780 is same with the state(6) to be set 01:07:08.972 [2024-12-09 06:06:03.426586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a2780 (9): Bad file descriptor 01:07:08.972 [2024-12-09 06:06:03.426601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:07:08.972 [2024-12-09 06:06:03.426610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:07:08.972 [2024-12-09 06:06:03.426619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:07:08.972 [2024-12-09 06:06:03.426628] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:07:08.972 [2024-12-09 06:06:03.426633] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:07:08.972 [2024-12-09 06:06:03.426638] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:07:08.972 [2024-12-09 06:06:03.431091] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 01:07:08.972 [2024-12-09 06:06:03.431129] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 01:07:08.972 [2024-12-09 06:06:03.431135] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 01:07:08.972 [2024-12-09 06:06:03.431140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 01:07:08.972 [2024-12-09 06:06:03.431179] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 01:07:08.972 [2024-12-09 06:06:03.431230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:08.972 [2024-12-09 06:06:03.431250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1616900 with addr=10.0.0.4, port=4420 01:07:08.972 [2024-12-09 06:06:03.431261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1616900 is same with the state(6) to be set 01:07:08.972 [2024-12-09 06:06:03.431276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1616900 (9): Bad file descriptor 01:07:08.972 [2024-12-09 06:06:03.431309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 01:07:08.972 [2024-12-09 06:06:03.431319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 01:07:08.972 [2024-12-09 06:06:03.431327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 01:07:08.972 [2024-12-09 06:06:03.431335] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
01:07:08.972 [2024-12-09 06:06:03.431341] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 01:07:08.972 [2024-12-09 06:06:03.431345] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 01:07:08.972 [2024-12-09 06:06:03.436510] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:07:08.972 [2024-12-09 06:06:03.436534] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:07:08.972 [2024-12-09 06:06:03.436541] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:07:08.972 [2024-12-09 06:06:03.436546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:07:08.972 [2024-12-09 06:06:03.436573] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:07:08.972 [2024-12-09 06:06:03.436625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:08.972 [2024-12-09 06:06:03.436657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a2780 with addr=10.0.0.3, port=4420 01:07:08.972 [2024-12-09 06:06:03.436670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2780 is same with the state(6) to be set 01:07:08.972 [2024-12-09 06:06:03.436686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a2780 (9): Bad file descriptor 01:07:08.972 [2024-12-09 06:06:03.436701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:07:08.972 [2024-12-09 06:06:03.436710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:07:08.972 [2024-12-09 06:06:03.436719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:07:08.972 [2024-12-09 06:06:03.436727] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:07:08.972 [2024-12-09 06:06:03.436733] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:07:08.972 [2024-12-09 06:06:03.436738] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:07:08.972 [2024-12-09 06:06:03.441188] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 01:07:08.972 [2024-12-09 06:06:03.441226] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 01:07:08.972 [2024-12-09 06:06:03.441232] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 01:07:08.972 [2024-12-09 06:06:03.441237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 01:07:08.972 [2024-12-09 06:06:03.441293] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
01:07:08.972 [2024-12-09 06:06:03.441344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:08.972 [2024-12-09 06:06:03.441364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1616900 with addr=10.0.0.4, port=4420 01:07:08.972 [2024-12-09 06:06:03.441374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1616900 is same with the state(6) to be set 01:07:08.972 [2024-12-09 06:06:03.441425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1616900 (9): Bad file descriptor 01:07:08.972 [2024-12-09 06:06:03.441441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 01:07:08.972 [2024-12-09 06:06:03.441450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 01:07:08.972 [2024-12-09 06:06:03.441459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 01:07:08.972 [2024-12-09 06:06:03.441467] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 01:07:08.972 [2024-12-09 06:06:03.441473] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 01:07:08.972 [2024-12-09 06:06:03.441478] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 01:07:08.972 [2024-12-09 06:06:03.446583] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:07:08.972 [2024-12-09 06:06:03.446607] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:07:08.972 [2024-12-09 06:06:03.446614] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:07:08.972 [2024-12-09 06:06:03.446619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:07:08.972 [2024-12-09 06:06:03.446677] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:07:08.972 [2024-12-09 06:06:03.446732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:08.972 [2024-12-09 06:06:03.446753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a2780 with addr=10.0.0.3, port=4420 01:07:08.972 [2024-12-09 06:06:03.446764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2780 is same with the state(6) to be set 01:07:08.972 [2024-12-09 06:06:03.446780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a2780 (9): Bad file descriptor 01:07:08.972 [2024-12-09 06:06:03.446793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:07:08.972 [2024-12-09 06:06:03.446802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:07:08.972 [2024-12-09 06:06:03.446812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:07:08.972 [2024-12-09 06:06:03.446820] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
01:07:08.972 [2024-12-09 06:06:03.446826] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:07:08.972 [2024-12-09 06:06:03.446831] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:07:08.973 [2024-12-09 06:06:03.451303] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 01:07:08.973 [2024-12-09 06:06:03.451356] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 01:07:08.973 [2024-12-09 06:06:03.451364] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 01:07:08.973 [2024-12-09 06:06:03.451369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 01:07:08.973 [2024-12-09 06:06:03.451412] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 01:07:08.973 [2024-12-09 06:06:03.451489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:08.973 [2024-12-09 06:06:03.451511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1616900 with addr=10.0.0.4, port=4420 01:07:08.973 [2024-12-09 06:06:03.451523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1616900 is same with the state(6) to be set 01:07:08.973 [2024-12-09 06:06:03.451538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1616900 (9): Bad file descriptor 01:07:08.973 [2024-12-09 06:06:03.451552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 01:07:08.973 [2024-12-09 06:06:03.451561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 01:07:08.973 [2024-12-09 06:06:03.451570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 01:07:08.973 [2024-12-09 06:06:03.451579] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 01:07:08.973 [2024-12-09 06:06:03.451584] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 01:07:08.973 [2024-12-09 06:06:03.451589] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 01:07:08.973 [2024-12-09 06:06:03.456678] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:07:08.973 [2024-12-09 06:06:03.456710] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:07:08.973 [2024-12-09 06:06:03.456717] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:07:08.973 [2024-12-09 06:06:03.456722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:07:08.973 [2024-12-09 06:06:03.456750] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
01:07:08.973 [2024-12-09 06:06:03.456803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:08.973 [2024-12-09 06:06:03.456838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a2780 with addr=10.0.0.3, port=4420 01:07:08.973 [2024-12-09 06:06:03.456849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2780 is same with the state(6) to be set 01:07:08.973 [2024-12-09 06:06:03.456864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a2780 (9): Bad file descriptor 01:07:08.973 [2024-12-09 06:06:03.456878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:07:08.973 [2024-12-09 06:06:03.456886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:07:08.973 [2024-12-09 06:06:03.456910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:07:08.973 [2024-12-09 06:06:03.456918] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:07:08.973 [2024-12-09 06:06:03.456923] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:07:08.973 [2024-12-09 06:06:03.456928] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:07:08.973 [2024-12-09 06:06:03.461422] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 01:07:08.973 [2024-12-09 06:06:03.461460] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 01:07:08.973 [2024-12-09 06:06:03.461467] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 01:07:08.973 [2024-12-09 06:06:03.461489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 01:07:08.973 [2024-12-09 06:06:03.461533] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 01:07:08.973 [2024-12-09 06:06:03.461594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:08.973 [2024-12-09 06:06:03.461614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1616900 with addr=10.0.0.4, port=4420 01:07:08.973 [2024-12-09 06:06:03.461625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1616900 is same with the state(6) to be set 01:07:08.973 [2024-12-09 06:06:03.461641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1616900 (9): Bad file descriptor 01:07:08.973 [2024-12-09 06:06:03.461669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 01:07:08.973 [2024-12-09 06:06:03.461678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 01:07:08.973 [2024-12-09 06:06:03.461688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 01:07:08.973 [2024-12-09 06:06:03.461696] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
01:07:08.973 [2024-12-09 06:06:03.461702] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 01:07:08.973 [2024-12-09 06:06:03.461707] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 01:07:08.973 [2024-12-09 06:06:03.466762] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:07:08.973 [2024-12-09 06:06:03.466790] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:07:08.973 [2024-12-09 06:06:03.466796] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:07:08.973 [2024-12-09 06:06:03.466802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:07:08.973 [2024-12-09 06:06:03.466830] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:07:08.973 [2024-12-09 06:06:03.466885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:08.973 [2024-12-09 06:06:03.466906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a2780 with addr=10.0.0.3, port=4420 01:07:08.973 [2024-12-09 06:06:03.466918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2780 is same with the state(6) to be set 01:07:08.973 [2024-12-09 06:06:03.466934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a2780 (9): Bad file descriptor 01:07:08.973 [2024-12-09 06:06:03.466948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:07:08.973 [2024-12-09 06:06:03.466957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:07:08.973 [2024-12-09 06:06:03.466966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:07:08.973 [2024-12-09 06:06:03.466975] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:07:08.973 [2024-12-09 06:06:03.466981] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:07:08.973 [2024-12-09 06:06:03.466986] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:07:08.973 [2024-12-09 06:06:03.471543] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 01:07:08.973 [2024-12-09 06:06:03.471572] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 01:07:08.973 [2024-12-09 06:06:03.471579] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 01:07:08.973 [2024-12-09 06:06:03.471584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 01:07:08.973 [2024-12-09 06:06:03.471614] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
01:07:08.973 [2024-12-09 06:06:03.471682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:08.973 [2024-12-09 06:06:03.471705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1616900 with addr=10.0.0.4, port=4420 01:07:08.973 [2024-12-09 06:06:03.471716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1616900 is same with the state(6) to be set 01:07:08.973 [2024-12-09 06:06:03.471733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1616900 (9): Bad file descriptor 01:07:08.973 [2024-12-09 06:06:03.471748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 01:07:08.973 [2024-12-09 06:06:03.471757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 01:07:08.973 [2024-12-09 06:06:03.471766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 01:07:08.973 [2024-12-09 06:06:03.471775] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 01:07:08.973 [2024-12-09 06:06:03.471781] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 01:07:08.973 [2024-12-09 06:06:03.471786] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 01:07:08.973 [2024-12-09 06:06:03.476840] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:07:08.973 [2024-12-09 06:06:03.476895] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:07:08.973 [2024-12-09 06:06:03.476902] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:07:08.973 [2024-12-09 06:06:03.476907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:07:08.973 [2024-12-09 06:06:03.476934] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:07:08.973 [2024-12-09 06:06:03.476986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:08.973 [2024-12-09 06:06:03.477007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a2780 with addr=10.0.0.3, port=4420 01:07:08.973 [2024-12-09 06:06:03.477018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2780 is same with the state(6) to be set 01:07:08.973 [2024-12-09 06:06:03.477033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a2780 (9): Bad file descriptor 01:07:08.973 [2024-12-09 06:06:03.477048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:07:08.973 [2024-12-09 06:06:03.477057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:07:08.973 [2024-12-09 06:06:03.477066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:07:08.973 [2024-12-09 06:06:03.477074] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
01:07:08.973 [2024-12-09 06:06:03.477080] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:07:08.973 [2024-12-09 06:06:03.477085] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:07:08.973 [2024-12-09 06:06:03.481623] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 01:07:08.973 [2024-12-09 06:06:03.481658] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 01:07:08.973 [2024-12-09 06:06:03.481666] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 01:07:08.973 [2024-12-09 06:06:03.481671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 01:07:08.973 [2024-12-09 06:06:03.481699] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 01:07:08.973 [2024-12-09 06:06:03.481751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:08.973 [2024-12-09 06:06:03.481772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1616900 with addr=10.0.0.4, port=4420 01:07:08.973 [2024-12-09 06:06:03.481783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1616900 is same with the state(6) to be set 01:07:08.973 [2024-12-09 06:06:03.481799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1616900 (9): Bad file descriptor 01:07:08.973 [2024-12-09 06:06:03.481813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 01:07:08.973 [2024-12-09 06:06:03.481822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 01:07:08.973 [2024-12-09 06:06:03.481831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 01:07:08.973 [2024-12-09 06:06:03.481839] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 01:07:08.973 [2024-12-09 06:06:03.481845] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 01:07:08.973 [2024-12-09 06:06:03.481850] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 01:07:08.973 [2024-12-09 06:06:03.486944] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:07:08.973 [2024-12-09 06:06:03.486968] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:07:08.973 [2024-12-09 06:06:03.486975] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:07:08.973 [2024-12-09 06:06:03.486981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:07:08.973 [2024-12-09 06:06:03.487021] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
01:07:08.973 [2024-12-09 06:06:03.487100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:08.973 [2024-12-09 06:06:03.487120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a2780 with addr=10.0.0.3, port=4420 01:07:08.973 [2024-12-09 06:06:03.487130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2780 is same with the state(6) to be set 01:07:08.973 [2024-12-09 06:06:03.487145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a2780 (9): Bad file descriptor 01:07:08.973 [2024-12-09 06:06:03.487157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:07:08.973 [2024-12-09 06:06:03.487166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:07:08.973 [2024-12-09 06:06:03.487174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:07:08.973 [2024-12-09 06:06:03.487181] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:07:08.973 [2024-12-09 06:06:03.487187] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:07:08.973 [2024-12-09 06:06:03.487191] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:07:08.973 [2024-12-09 06:06:03.491710] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 01:07:08.973 [2024-12-09 06:06:03.491756] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 01:07:08.973 [2024-12-09 06:06:03.491763] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 01:07:08.973 [2024-12-09 06:06:03.491769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 01:07:08.973 [2024-12-09 06:06:03.491809] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 01:07:08.973 [2024-12-09 06:06:03.491888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:08.973 [2024-12-09 06:06:03.491907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1616900 with addr=10.0.0.4, port=4420 01:07:08.973 [2024-12-09 06:06:03.491917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1616900 is same with the state(6) to be set 01:07:08.973 [2024-12-09 06:06:03.491931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1616900 (9): Bad file descriptor 01:07:08.973 [2024-12-09 06:06:03.491944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 01:07:08.973 [2024-12-09 06:06:03.491952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 01:07:08.973 [2024-12-09 06:06:03.491960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 01:07:08.973 [2024-12-09 06:06:03.491967] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
01:07:08.973 [2024-12-09 06:06:03.491973] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 01:07:08.973 [2024-12-09 06:06:03.491977] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 01:07:08.973 [2024-12-09 06:06:03.497030] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:07:08.973 [2024-12-09 06:06:03.497067] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:07:08.973 [2024-12-09 06:06:03.497073] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:07:08.973 [2024-12-09 06:06:03.497078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:07:08.973 [2024-12-09 06:06:03.497118] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:07:08.973 [2024-12-09 06:06:03.497168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:08.973 [2024-12-09 06:06:03.497187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a2780 with addr=10.0.0.3, port=4420 01:07:08.973 [2024-12-09 06:06:03.497198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2780 is same with the state(6) to be set 01:07:08.973 [2024-12-09 06:06:03.497213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a2780 (9): Bad file descriptor 01:07:08.973 [2024-12-09 06:06:03.497236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:07:08.973 [2024-12-09 06:06:03.497245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:07:08.973 [2024-12-09 06:06:03.497254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:07:08.973 [2024-12-09 06:06:03.497262] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 01:07:08.973 [2024-12-09 06:06:03.497267] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:07:08.973 [2024-12-09 06:06:03.497272] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:07:08.973 [2024-12-09 06:06:03.501835] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 01:07:08.973 [2024-12-09 06:06:03.501872] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 01:07:08.974 [2024-12-09 06:06:03.501879] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 01:07:08.974 [2024-12-09 06:06:03.501884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 01:07:08.974 [2024-12-09 06:06:03.501924] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
01:07:08.974 [2024-12-09 06:06:03.501974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:08.974 [2024-12-09 06:06:03.501994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1616900 with addr=10.0.0.4, port=4420 01:07:08.974 [2024-12-09 06:06:03.502004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1616900 is same with the state(6) to be set 01:07:08.974 [2024-12-09 06:06:03.502020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1616900 (9): Bad file descriptor 01:07:08.974 [2024-12-09 06:06:03.502033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 01:07:08.974 [2024-12-09 06:06:03.502042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 01:07:08.974 [2024-12-09 06:06:03.502065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 01:07:08.974 [2024-12-09 06:06:03.502073] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 01:07:08.974 [2024-12-09 06:06:03.502078] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 01:07:08.974 [2024-12-09 06:06:03.502083] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 01:07:08.974 [2024-12-09 06:06:03.507127] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 01:07:08.974 [2024-12-09 06:06:03.507164] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 01:07:08.974 [2024-12-09 06:06:03.507170] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 01:07:08.974 [2024-12-09 06:06:03.507175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 01:07:08.974 [2024-12-09 06:06:03.507214] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 01:07:08.974 [2024-12-09 06:06:03.507263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:07:08.974 [2024-12-09 06:06:03.507283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15a2780 with addr=10.0.0.3, port=4420 01:07:08.974 [2024-12-09 06:06:03.507293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a2780 is same with the state(6) to be set 01:07:08.974 [2024-12-09 06:06:03.507309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a2780 (9): Bad file descriptor 01:07:08.974 [2024-12-09 06:06:03.507322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 01:07:08.974 [2024-12-09 06:06:03.507330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 01:07:08.974 [2024-12-09 06:06:03.507339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 01:07:08.974 [2024-12-09 06:06:03.507346] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
01:07:08.974 [2024-12-09 06:06:03.507352] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 01:07:08.974 [2024-12-09 06:06:03.507356] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 01:07:08.974 [2024-12-09 06:06:03.510455] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 01:07:08.974 [2024-12-09 06:06:03.510508] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 01:07:08.974 [2024-12-09 06:06:03.510530] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:07:08.974 [2024-12-09 06:06:03.510569] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 not found 01:07:08.974 [2024-12-09 06:06:03.510587] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 01:07:08.974 [2024-12-09 06:06:03.510602] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 01:07:09.232 [2024-12-09 06:06:03.596581] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 01:07:09.232 [2024-12-09 06:06:03.596678] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # get_subsystem_names 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # get_bdev_list 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 
01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # get_subsystem_paths mdns0_nvme0 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # [[ 4421 == \4\4\2\1 ]] 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # get_subsystem_paths mdns1_nvme0 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # [[ 4421 == \4\4\2\1 ]] 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # get_notification_count 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:10.166 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 01:07:10.167 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 01:07:10.167 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # [[ 0 == 0 ]] 01:07:10.167 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@206 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 01:07:10.167 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:10.167 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:10.167 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:10.167 06:06:04 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@207 -- # sleep 1 01:07:10.424 [2024-12-09 06:06:04.815600] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # get_mdns_discovery_svcs 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # [[ '' == '' ]] 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # get_subsystem_names 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # [[ '' == '' ]] 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@211 -- # get_bdev_list 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # [[ '' == '' ]] 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@212 -- # get_notification_count 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=4 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=8 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@213 -- # [[ 4 == 4 ]] 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@216 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:11.369 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:11.627 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:11.627 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@217 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 01:07:11.627 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0 01:07:11.627 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 01:07:11.627 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:07:11.627 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:07:11.627 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:07:11.627 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 01:07:11.627 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 01:07:11.627 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:11.627 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:11.627 [2024-12-09 06:06:05.970170] bdev_mdns_client.c: 471:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 01:07:11.627 2024/12/09 06:06:05 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 01:07:11.627 request: 01:07:11.627 { 01:07:11.627 "method": "bdev_nvme_start_mdns_discovery", 01:07:11.627 "params": { 01:07:11.627 "name": "mdns", 01:07:11.627 "svcname": "_nvme-disc._http", 01:07:11.627 "hostnqn": "nqn.2021-12.io.spdk:test" 01:07:11.627 } 01:07:11.627 } 01:07:11.627 Got JSON-RPC error response 01:07:11.627 GoRPCClient: error on JSON-RPC call 01:07:11.627 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:07:11.627 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1 01:07:11.627 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:07:11.627 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:07:11.627 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:07:11.627 06:06:05 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@218 -- # sleep 5 01:07:12.194 [2024-12-09 06:06:06.558965] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 01:07:12.194 [2024-12-09 06:06:06.658959] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 01:07:12.194 [2024-12-09 06:06:06.758967] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 01:07:12.194 [2024-12-09 06:06:06.759006] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 01:07:12.194 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:07:12.194 cookie is 0 01:07:12.194 is_local: 1 01:07:12.194 our_own: 0 01:07:12.194 wide_area: 0 01:07:12.194 multicast: 1 01:07:12.194 cached: 1 01:07:12.452 [2024-12-09 06:06:06.858965] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 01:07:12.452 [2024-12-09 06:06:06.858990] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 01:07:12.452 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:07:12.452 cookie is 0 01:07:12.452 is_local: 1 01:07:12.452 our_own: 0 01:07:12.452 wide_area: 0 01:07:12.452 multicast: 1 01:07:12.452 cached: 1 01:07:12.452 [2024-12-09 06:06:06.859018] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.4 trid->trsvcid: 8009 01:07:12.452 [2024-12-09 06:06:06.958966] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 01:07:12.452 [2024-12-09 06:06:06.958992] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 01:07:12.452 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:07:12.452 cookie is 0 01:07:12.452 is_local: 1 01:07:12.452 our_own: 0 01:07:12.452 wide_area: 0 01:07:12.452 multicast: 1 01:07:12.452 cached: 1 01:07:12.713 [2024-12-09 06:06:07.058965] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 01:07:12.713 [2024-12-09 06:06:07.058989] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 01:07:12.713 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:07:12.713 cookie is 0 01:07:12.713 is_local: 1 01:07:12.713 our_own: 0 01:07:12.713 wide_area: 0 01:07:12.713 multicast: 1 01:07:12.713 cached: 1 01:07:12.713 [2024-12-09 06:06:07.059031] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 01:07:13.281 [2024-12-09 06:06:07.766494] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 01:07:13.281 [2024-12-09 06:06:07.766524] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 01:07:13.281 [2024-12-09 06:06:07.766560] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 01:07:13.281 [2024-12-09 06:06:07.852591] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new subsystem mdns0_nvme0 01:07:13.540 [2024-12-09 06:06:07.910997] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] ctrlr was created to 10.0.0.4:4421 01:07:13.540 [2024-12-09 06:06:07.911602] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] Connecting qpair 0x15f63e0:1 started. 01:07:13.540 [2024-12-09 06:06:07.913011] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 01:07:13.540 [2024-12-09 06:06:07.913039] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 01:07:13.540 [2024-12-09 06:06:07.915257] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] qpair 0x15f63e0 was disconnected and freed. delete nvme_qpair. 
01:07:13.540 [2024-12-09 06:06:07.966553] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 01:07:13.540 [2024-12-09 06:06:07.966577] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 01:07:13.540 [2024-12-09 06:06:07.966613] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 01:07:13.540 [2024-12-09 06:06:08.052668] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem mdns1_nvme0 01:07:13.540 [2024-12-09 06:06:08.111009] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 01:07:13.540 [2024-12-09 06:06:08.111576] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x16325b0:1 started. 01:07:13.540 [2024-12-09 06:06:08.112889] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 01:07:13.540 [2024-12-09 06:06:08.112916] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 01:07:13.540 [2024-12-09 06:06:08.115252] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x16325b0 was disconnected and freed. delete nvme_qpair. 01:07:16.828 06:06:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # get_mdns_discovery_svcs 01:07:16.828 06:06:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 01:07:16.828 06:06:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:16.828 06:06:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:16.828 06:06:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 01:07:16.828 06:06:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 01:07:16.828 06:06:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 01:07:16.828 06:06:10 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # [[ mdns == \m\d\n\s ]] 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # get_discovery_ctrlrs 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ 
\m\d\n\s\1\_\n\v\m\e ]] 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # get_bdev_list 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@225 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:16.828 [2024-12-09 06:06:11.166752] bdev_mdns_client.c: 476:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 01:07:16.828 2024/12/09 06:06:11 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 01:07:16.828 request: 01:07:16.828 { 01:07:16.828 "method": "bdev_nvme_start_mdns_discovery", 01:07:16.828 "params": { 01:07:16.828 "name": "cdc", 01:07:16.828 "svcname": "_nvme-disc._tcp", 01:07:16.828 "hostnqn": "nqn.2021-12.io.spdk:test" 01:07:16.828 } 01:07:16.828 } 01:07:16.828 Got JSON-RPC error response 01:07:16.828 GoRPCClient: error on JSON-RPC call 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 
]] 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # get_discovery_ctrlrs 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # get_bdev_list 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@228 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:16.828 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@231 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 found 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local 
ip=10.0.0.3 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 01:07:16.829 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 01:07:16.829 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 01:07:16.829 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 01:07:16.829 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:07:16.829 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:07:16.829 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:07:16.829 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ 
=;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@232 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:16.829 06:06:11 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@234 -- # sleep 1 01:07:16.829 [2024-12-09 06:06:11.358974] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 01:07:17.765 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@236 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 'not found' 01:07:17.765 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 01:07:17.765 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3 01:07:17.765 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 01:07:17.765 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 01:07:17.765 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 01:07:17.765 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 01:07:18.025 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 01:07:18.025 
=;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 01:07:18.025 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@238 -- # rpc_cmd nvmf_stop_mdns_prr 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@240 -- # trap - SIGINT SIGTERM EXIT 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@242 -- # kill 94190 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@245 -- # wait 94190 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@246 -- # kill 94201 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@247 -- # nvmftestfini 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # sync 01:07:18.025 Got SIGTERM, quitting. 01:07:18.025 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 01:07:18.025 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 01:07:18.025 avahi-daemon 0.8 exiting. 
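The check_mdns_request_exists checks traced above reduce to scanning the parseable avahi-browse dump for a record carrying the expected service name, address and port. A simplified bash reconstruction, using only the names and commands visible in the trace (the real helper in host/mdns_discovery.sh may differ in detail):

    check_mdns_request_exists() {
        local process=$1     # e.g. spdk1
        local ip=$2          # e.g. 10.0.0.3
        local port=$3        # e.g. 8009
        local check_type=$4  # "found" or "not found"
        local output line
        local -a lines

        # -t: dump the current cache and exit, -r: resolve records, -p: parseable output
        output=$(avahi-browse -t -r _nvme-disc._tcp -p)
        readarray -t lines <<< "$output"

        for line in "${lines[@]}"; do
            if [[ $line == *"$process"* && $line == *"$ip"* && $line == *"$port"* ]]; then
                # record is still being advertised
                [[ $check_type == "found" ]] && return 0
                return 1
            fi
        done

        # no matching record in the dump
        [[ $check_type == "not found" ]] && return 0
        return 1
    }

In the log above, the first call (spdk1 10.0.0.3 8009 found) matches the '=' resolution line for spdk1 on 10.0.0.3:8009, while after nvmf_subsystem_remove_listener the second call ('not found') only sees spdk0 records and therefore also returns 0.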
01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set +e 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:07:18.025 rmmod nvme_tcp 01:07:18.025 rmmod nvme_fabrics 01:07:18.025 rmmod nvme_keyring 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@128 -- # set -e 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@129 -- # return 0 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@517 -- # '[' -n 94148 ']' 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@518 -- # killprocess 94148 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # '[' -z 94148 ']' 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # kill -0 94148 01:07:18.025 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # uname 01:07:18.283 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:07:18.283 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94148 01:07:18.283 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:07:18.283 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:07:18.283 killing process with pid 94148 01:07:18.283 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94148' 01:07:18.283 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@973 -- # kill 94148 01:07:18.283 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@978 -- # wait 94148 01:07:18.283 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:07:18.283 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:07:18.283 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:07:18.283 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@297 -- # iptr 01:07:18.284 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-save 01:07:18.284 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:07:18.284 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-restore 01:07:18.284 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:07:18.284 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:07:18.284 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:07:18.284 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:07:18.284 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:07:18.284 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:07:18.284 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:07:18.284 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:07:18.284 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:07:18.543 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:07:18.543 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:07:18.543 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:07:18.543 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:07:18.543 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:07:18.543 06:06:12 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:07:18.543 06:06:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 01:07:18.543 06:06:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:07:18.543 06:06:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:07:18.543 06:06:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:07:18.543 06:06:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@300 -- # return 0 01:07:18.543 01:07:18.543 real 0m21.671s 01:07:18.543 user 0m42.462s 01:07:18.543 sys 0m2.120s 01:07:18.543 06:06:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 01:07:18.543 06:06:13 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 01:07:18.543 ************************************ 01:07:18.543 END TEST nvmf_mdns_discovery 01:07:18.543 ************************************ 01:07:18.543 06:06:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 01:07:18.543 06:06:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 01:07:18.543 06:06:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:07:18.543 06:06:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:07:18.543 06:06:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:07:18.543 ************************************ 01:07:18.543 START TEST nvmf_host_multipath 01:07:18.543 ************************************ 01:07:18.543 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 01:07:18.848 * Looking for test storage... 
01:07:18.848 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lcov --version 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:07:18.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:07:18.848 --rc genhtml_branch_coverage=1 01:07:18.848 --rc genhtml_function_coverage=1 01:07:18.848 --rc genhtml_legend=1 01:07:18.848 --rc geninfo_all_blocks=1 01:07:18.848 --rc geninfo_unexecuted_blocks=1 01:07:18.848 01:07:18.848 ' 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:07:18.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:07:18.848 --rc genhtml_branch_coverage=1 01:07:18.848 --rc genhtml_function_coverage=1 01:07:18.848 --rc genhtml_legend=1 01:07:18.848 --rc geninfo_all_blocks=1 01:07:18.848 --rc geninfo_unexecuted_blocks=1 01:07:18.848 01:07:18.848 ' 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:07:18.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:07:18.848 --rc genhtml_branch_coverage=1 01:07:18.848 --rc genhtml_function_coverage=1 01:07:18.848 --rc genhtml_legend=1 01:07:18.848 --rc geninfo_all_blocks=1 01:07:18.848 --rc geninfo_unexecuted_blocks=1 01:07:18.848 01:07:18.848 ' 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:07:18.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:07:18.848 --rc genhtml_branch_coverage=1 01:07:18.848 --rc genhtml_function_coverage=1 01:07:18.848 --rc genhtml_legend=1 01:07:18.848 --rc geninfo_all_blocks=1 01:07:18.848 --rc geninfo_unexecuted_blocks=1 01:07:18.848 01:07:18.848 ' 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:18.848 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:07:18.849 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:07:18.849 Cannot find device "nvmf_init_br" 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:07:18.849 Cannot find device "nvmf_init_br2" 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:07:18.849 Cannot find device "nvmf_tgt_br" 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:07:18.849 Cannot find device "nvmf_tgt_br2" 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:07:18.849 Cannot find device "nvmf_init_br" 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 01:07:18.849 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:07:19.128 Cannot find device "nvmf_init_br2" 01:07:19.128 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 01:07:19.128 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:07:19.128 Cannot find device "nvmf_tgt_br" 01:07:19.128 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 01:07:19.128 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:07:19.128 Cannot find device "nvmf_tgt_br2" 01:07:19.128 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 01:07:19.128 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:07:19.128 Cannot find device "nvmf_br" 01:07:19.128 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 01:07:19.128 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:07:19.128 Cannot find device "nvmf_init_if" 01:07:19.128 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 01:07:19.128 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:07:19.128 Cannot find device "nvmf_init_if2" 01:07:19.128 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
01:07:19.129 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:07:19.129 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
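Taken together, the nvmf_veth_init commands traced above build a small virtual topology: the initiator interfaces (10.0.0.1, 10.0.0.2) stay in the root namespace, the target interfaces (10.0.0.3, 10.0.0.4) are moved into nvmf_tgt_ns_spdk, and the peer ends of all four veth pairs are enslaved to the nvmf_br bridge. Condensed to a single interface pair (the second pair is set up the same way; commands as they appear in the trace):

    # namespace that will host the SPDK target
    ip netns add nvmf_tgt_ns_spdk

    # initiator veth pair: nvmf_init_if stays in the root ns, nvmf_init_br goes on the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    # target veth pair: nvmf_tgt_if is moved into the namespace, nvmf_tgt_br goes on the bridge
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # addressing: initiator side 10.0.0.1/24, target side 10.0.0.3/24
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    # bring everything up and wire the peer ends through the bridge
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

The iptables ACCEPT rules and the four ping checks that follow simply confirm that 10.0.0.1/10.0.0.2 in the root namespace and 10.0.0.3/10.0.0.4 in the target namespace can reach each other across the bridge before the target is started.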
01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:07:19.129 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:07:19.129 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 01:07:19.129 01:07:19.129 --- 10.0.0.3 ping statistics --- 01:07:19.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:19.129 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:07:19.129 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:07:19.129 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 01:07:19.129 01:07:19.129 --- 10.0.0.4 ping statistics --- 01:07:19.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:19.129 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 01:07:19.129 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:07:19.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:07:19.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 01:07:19.388 01:07:19.388 --- 10.0.0.1 ping statistics --- 01:07:19.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:19.388 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 01:07:19.388 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:07:19.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:07:19.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 01:07:19.388 01:07:19.388 --- 10.0.0.2 ping statistics --- 01:07:19.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:07:19.388 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 01:07:19.388 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:07:19.388 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 01:07:19.388 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:07:19.388 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:07:19.388 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:07:19.388 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:07:19.388 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:07:19.388 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:07:19.388 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:07:19.388 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 01:07:19.388 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:07:19.389 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 01:07:19.389 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:07:19.389 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=94844 01:07:19.389 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 01:07:19.389 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 94844 01:07:19.389 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 94844 ']' 01:07:19.389 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:07:19.389 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 01:07:19.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:07:19.389 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:07:19.389 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 01:07:19.389 06:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:07:19.389 [2024-12-09 06:06:13.813781] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
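nvmfappstart then launches the SPDK target inside the namespace and waits for its JSON-RPC socket before any configuration is pushed. A minimal stand-in for that startup sequence, assuming the same paths as in the trace (the real waitforlisten helper in autotest_common.sh does the socket polling with retries and a timeout; the simple loop below is only illustrative):

    # load the kernel NVMe/TCP initiator side and start the target on two cores (mask 0x3)
    modprobe nvme-tcp
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!

    # wait until the target answers on /var/tmp/spdk.sock before issuing rpc.py calls
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc_py" -s /var/tmp/spdk.sock spdk_get_version &> /dev/null; do
        sleep 0.5
    done

Once the socket answers, the test creates the TCP transport, a 64 MB Malloc0 bdev, and the nqn.2016-06.io.spdk:cnode1 subsystem with listeners on 10.0.0.3:4420 and 10.0.0.3:4421, as shown in the rpc.py calls that follow.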
01:07:19.389 [2024-12-09 06:06:13.813882] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:07:19.389 [2024-12-09 06:06:13.967295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:07:19.649 [2024-12-09 06:06:14.006275] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:07:19.649 [2024-12-09 06:06:14.006345] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:07:19.649 [2024-12-09 06:06:14.006371] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:07:19.649 [2024-12-09 06:06:14.006381] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:07:19.649 [2024-12-09 06:06:14.006390] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:07:19.649 [2024-12-09 06:06:14.007297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:07:19.649 [2024-12-09 06:06:14.007313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:07:19.649 06:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:07:19.649 06:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 01:07:19.649 06:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:07:19.649 06:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 01:07:19.649 06:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:07:19.649 06:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:07:19.649 06:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=94844 01:07:19.649 06:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:07:19.909 [2024-12-09 06:06:14.433363] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:07:19.909 06:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:07:20.476 Malloc0 01:07:20.476 06:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 01:07:20.733 06:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:07:20.990 06:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:07:21.249 [2024-12-09 06:06:15.604071] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:07:21.249 06:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4421 01:07:21.508 [2024-12-09 06:06:15.848155] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:07:21.508 06:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=94931 01:07:21.508 06:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 01:07:21.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:07:21.508 06:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 01:07:21.508 06:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 94931 /var/tmp/bdevperf.sock 01:07:21.508 06:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 94931 ']' 01:07:21.508 06:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:07:21.508 06:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 01:07:21.508 06:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:07:21.508 06:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 01:07:21.508 06:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:07:21.767 06:06:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:07:21.767 06:06:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 01:07:21.767 06:06:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 01:07:22.026 06:06:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 01:07:22.594 Nvme0n1 01:07:22.594 06:06:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 01:07:22.852 Nvme0n1 01:07:22.852 06:06:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 01:07:22.852 06:06:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 01:07:23.788 06:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 01:07:23.788 06:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:07:24.354 06:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
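The set_ANA_state helper traced at host/multipath.sh@58-59 assigns one ANA state per listener of the same subsystem, which is what lets bdevperf's multipath policy steer I/O between the 4420 and 4421 paths. Reconstructed from the traced rpc.py calls (the real helper may differ slightly):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    set_ANA_state() {
        # $1 = ANA state for the 10.0.0.3:4420 listener, $2 = for the 10.0.0.3:4421 listener
        "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.3 -s 4420 -n "$1"
        "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }

    # make 4421 the optimized path while 4420 stays reachable but non_optimized
    set_ANA_state non_optimized optimized

confirm_io_on_port then attaches the nvmf_path.bt bpftrace script to the target process (pid 94844), sleeps, looks up which listener currently reports the requested ANA state via nvmf_subsystem_get_listeners piped through jq, and verifies that the @path[10.0.0.3, <port>] counters recorded in trace.txt point at that same port, as the awk/cut/sed pipeline in the following trace shows.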
01:07:24.613 06:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 01:07:24.613 06:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94844 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:07:24.613 06:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95010 01:07:24.613 06:06:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:07:31.177 06:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:07:31.177 06:06:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 01:07:31.177 06:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 01:07:31.177 06:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:07:31.177 Attaching 4 probes... 01:07:31.177 @path[10.0.0.3, 4421]: 17300 01:07:31.178 @path[10.0.0.3, 4421]: 18186 01:07:31.178 @path[10.0.0.3, 4421]: 18010 01:07:31.178 @path[10.0.0.3, 4421]: 18255 01:07:31.178 @path[10.0.0.3, 4421]: 18421 01:07:31.178 06:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:07:31.178 06:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 01:07:31.178 06:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:07:31.178 06:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 01:07:31.178 06:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 01:07:31.178 06:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 01:07:31.178 06:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95010 01:07:31.178 06:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:07:31.178 06:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 01:07:31.178 06:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:07:31.178 06:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 01:07:31.435 06:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 01:07:31.435 06:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95146 01:07:31.435 06:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:07:31.435 06:06:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94844 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:07:38.001 06:06:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:07:38.001 06:06:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 01:07:38.001 06:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 01:07:38.001 06:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:07:38.001 Attaching 4 probes... 01:07:38.001 @path[10.0.0.3, 4420]: 18061 01:07:38.001 @path[10.0.0.3, 4420]: 18438 01:07:38.001 @path[10.0.0.3, 4420]: 18096 01:07:38.001 @path[10.0.0.3, 4420]: 17893 01:07:38.001 @path[10.0.0.3, 4420]: 18085 01:07:38.001 06:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 01:07:38.001 06:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:07:38.001 06:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:07:38.001 06:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 01:07:38.001 06:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 01:07:38.001 06:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 01:07:38.001 06:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95146 01:07:38.001 06:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:07:38.001 06:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 01:07:38.001 06:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 01:07:38.258 06:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 01:07:38.515 06:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 01:07:38.515 06:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94844 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:07:38.515 06:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95278 01:07:38.515 06:06:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:07:45.075 06:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:07:45.075 06:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 01:07:45.075 06:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 01:07:45.075 06:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:07:45.075 Attaching 4 probes... 
01:07:45.075 @path[10.0.0.3, 4421]: 14190 01:07:45.075 @path[10.0.0.3, 4421]: 17753 01:07:45.075 @path[10.0.0.3, 4421]: 17131 01:07:45.075 @path[10.0.0.3, 4421]: 17370 01:07:45.075 @path[10.0.0.3, 4421]: 17765 01:07:45.075 06:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:07:45.075 06:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 01:07:45.075 06:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:07:45.075 06:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 01:07:45.075 06:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 01:07:45.075 06:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 01:07:45.075 06:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95278 01:07:45.075 06:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:07:45.075 06:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 01:07:45.075 06:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 01:07:45.075 06:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 01:07:45.334 06:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 01:07:45.334 06:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95408 01:07:45.334 06:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:07:45.334 06:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94844 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:07:51.898 06:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:07:51.898 06:06:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 01:07:51.898 06:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 01:07:51.898 06:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:07:51.898 Attaching 4 probes... 
01:07:51.898 01:07:51.898 01:07:51.898 01:07:51.898 01:07:51.898 01:07:51.898 06:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:07:51.898 06:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 01:07:51.898 06:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:07:51.898 06:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 01:07:51.898 06:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 01:07:51.898 06:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 01:07:51.898 06:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95408 01:07:51.898 06:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:07:51.898 06:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 01:07:51.898 06:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:07:51.898 06:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 01:07:52.157 06:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 01:07:52.157 06:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95543 01:07:52.157 06:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94844 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:07:52.157 06:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:07:58.742 06:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:07:58.742 06:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 01:07:58.742 06:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 01:07:58.742 06:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:07:58.742 Attaching 4 probes... 
01:07:58.742 @path[10.0.0.3, 4421]: 16991 01:07:58.742 @path[10.0.0.3, 4421]: 16713 01:07:58.742 @path[10.0.0.3, 4421]: 16158 01:07:58.742 @path[10.0.0.3, 4421]: 16401 01:07:58.742 @path[10.0.0.3, 4421]: 15863 01:07:58.742 06:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:07:58.742 06:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:07:58.742 06:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 01:07:58.742 06:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 01:07:58.742 06:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 01:07:58.742 06:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 01:07:58.742 06:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95543 01:07:58.742 06:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:07:58.742 06:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
01:07:59.003 [2024-12-09 06:06:53.395119] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213ae90 is same with the state(6) to be set
01:07:59.003-01:07:59.005 [2024-12-09 06:06:53.395202 - 06:06:53.396292] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: (same message repeated for tqpair=0x213ae90)
01:07:59.005 06:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 01:07:59.949 06:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 01:07:59.949 06:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95680 01:07:59.949 06:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94844 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:07:59.949 06:06:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:08:06.507 06:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 01:08:06.507 06:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:08:06.507 06:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 01:08:06.507 06:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:08:06.507 Attaching 4 probes... 
01:08:06.507 @path[10.0.0.3, 4420]: 16124 01:08:06.507 @path[10.0.0.3, 4420]: 16035 01:08:06.507 @path[10.0.0.3, 4420]: 16323 01:08:06.507 @path[10.0.0.3, 4420]: 16607 01:08:06.507 @path[10.0.0.3, 4420]: 16066 01:08:06.507 06:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:08:06.507 06:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:08:06.507 06:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 01:08:06.507 06:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 01:08:06.507 06:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 01:08:06.507 06:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 01:08:06.507 06:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95680 01:08:06.507 06:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:08:06.507 06:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 01:08:06.507 [2024-12-09 06:07:01.010399] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:08:06.507 06:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 01:08:07.077 06:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 01:08:13.687 06:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 01:08:13.687 06:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95867 01:08:13.687 06:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94844 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 01:08:13.687 06:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 01:08:18.957 06:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 01:08:18.957 06:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 01:08:19.216 06:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 01:08:19.216 06:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:08:19.216 Attaching 4 probes... 
01:08:19.216 @path[10.0.0.3, 4421]: 16793 01:08:19.216 @path[10.0.0.3, 4421]: 17413 01:08:19.216 @path[10.0.0.3, 4421]: 16792 01:08:19.216 @path[10.0.0.3, 4421]: 16394 01:08:19.216 @path[10.0.0.3, 4421]: 16213 01:08:19.216 06:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 01:08:19.216 06:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 01:08:19.216 06:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 01:08:19.216 06:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 01:08:19.216 06:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 01:08:19.217 06:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 01:08:19.217 06:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95867 01:08:19.217 06:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:08:19.217 06:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 94931 01:08:19.217 06:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 94931 ']' 01:08:19.217 06:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 94931 01:08:19.217 06:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 01:08:19.217 06:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:08:19.217 06:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94931 01:08:19.217 06:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:08:19.217 06:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:08:19.217 killing process with pid 94931 01:08:19.217 06:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94931' 01:08:19.217 06:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 94931 01:08:19.217 06:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 94931 01:08:19.217 { 01:08:19.217 "results": [ 01:08:19.217 { 01:08:19.217 "job": "Nvme0n1", 01:08:19.217 "core_mask": "0x4", 01:08:19.217 "workload": "verify", 01:08:19.217 "status": "terminated", 01:08:19.217 "verify_range": { 01:08:19.217 "start": 0, 01:08:19.217 "length": 16384 01:08:19.217 }, 01:08:19.217 "queue_depth": 128, 01:08:19.217 "io_size": 4096, 01:08:19.217 "runtime": 56.330534, 01:08:19.217 "iops": 7320.346013407222, 01:08:19.217 "mibps": 28.59510161487196, 01:08:19.217 "io_failed": 0, 01:08:19.217 "io_timeout": 0, 01:08:19.217 "avg_latency_us": 17453.413786371937, 01:08:19.217 "min_latency_us": 793.1345454545454, 01:08:19.217 "max_latency_us": 7076934.749090909 01:08:19.217 } 01:08:19.217 ], 01:08:19.217 "core_count": 1 01:08:19.217 } 01:08:19.559 06:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 94931 01:08:19.559 06:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:08:19.559 [2024-12-09 06:06:15.930409] Starting SPDK v25.01-pre git sha1 15ce1ba92 / 
DPDK 24.03.0 initialization... 01:08:19.559 [2024-12-09 06:06:15.930512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94931 ] 01:08:19.559 [2024-12-09 06:06:16.083219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:08:19.559 [2024-12-09 06:06:16.123620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:08:19.559 Running I/O for 90 seconds... 01:08:19.559 8838.00 IOPS, 34.52 MiB/s [2024-12-09T06:07:14.145Z] 9062.50 IOPS, 35.40 MiB/s [2024-12-09T06:07:14.145Z] 8984.67 IOPS, 35.10 MiB/s [2024-12-09T06:07:14.145Z] 8996.50 IOPS, 35.14 MiB/s [2024-12-09T06:07:14.145Z] 8998.80 IOPS, 35.15 MiB/s [2024-12-09T06:07:14.145Z] 9021.17 IOPS, 35.24 MiB/s [2024-12-09T06:07:14.145Z] 9056.71 IOPS, 35.38 MiB/s [2024-12-09T06:07:14.145Z] 9004.12 IOPS, 35.17 MiB/s [2024-12-09T06:07:14.145Z] [2024-12-09 06:06:25.913620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:86888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.559 [2024-12-09 06:06:25.913701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:08:19.559 [2024-12-09 06:06:25.913763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:86896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.559 [2024-12-09 06:06:25.913787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:08:19.559 [2024-12-09 06:06:25.913813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:86904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.559 [2024-12-09 06:06:25.913831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:08:19.559 [2024-12-09 06:06:25.913854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:86912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.559 [2024-12-09 06:06:25.913871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:08:19.559 [2024-12-09 06:06:25.913894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:86920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.559 [2024-12-09 06:06:25.913910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:08:19.559 [2024-12-09 06:06:25.913933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:86928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.559 [2024-12-09 06:06:25.913950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:08:19.559 [2024-12-09 06:06:25.913972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:86936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.559 [2024-12-09 06:06:25.913988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 
01:08:19.559 [2024-12-09 06:06:25.914010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:86944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.559 [2024-12-09 06:06:25.914027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:08:19.559 [2024-12-09 06:06:25.914049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:86952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.559 [2024-12-09 06:06:25.914066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:08:19.559 [2024-12-09 06:06:25.914089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:86960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.559 [2024-12-09 06:06:25.914105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:08:19.559 [2024-12-09 06:06:25.914159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:86968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.559 [2024-12-09 06:06:25.914178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:08:19.559 [2024-12-09 06:06:25.914201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:86976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.559 [2024-12-09 06:06:25.914218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:08:19.559 [2024-12-09 06:06:25.914239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:86984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.559 [2024-12-09 06:06:25.914257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:08:19.559 [2024-12-09 06:06:25.914294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:86992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.559 [2024-12-09 06:06:25.914311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:08:19.559 [2024-12-09 06:06:25.914333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:87000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.559 [2024-12-09 06:06:25.914349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:08:19.559 [2024-12-09 06:06:25.914371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:87008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.559 [2024-12-09 06:06:25.914388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:08:19.559 [2024-12-09 06:06:25.914410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:87016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.559 [2024-12-09 06:06:25.914427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:08:19.559 [2024-12-09 06:06:25.914459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:87024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.559 [2024-12-09 06:06:25.914483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:08:19.559 [2024-12-09 06:06:25.914510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:87032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.559 [2024-12-09 06:06:25.914528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:08:19.559 [2024-12-09 06:06:25.914551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:87040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.559 [2024-12-09 06:06:25.914568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:08:19.559 [2024-12-09 06:06:25.914589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:87048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.559 [2024-12-09 06:06:25.914623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:08:19.559 [2024-12-09 06:06:25.914646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:87056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.559 [2024-12-09 06:06:25.914684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:08:19.559 [2024-12-09 06:06:25.914739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.559 [2024-12-09 06:06:25.914762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:08:19.559 [2024-12-09 06:06:25.914786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:87200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.559 [2024-12-09 06:06:25.914809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:08:19.560 [2024-12-09 06:06:25.914834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:87208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.560 [2024-12-09 06:06:25.914851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:08:19.560 [2024-12-09 06:06:25.914873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:87216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.560 [2024-12-09 06:06:25.914891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:08:19.560 [2024-12-09 06:06:25.914913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:87224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.560 [2024-12-09 06:06:25.914930] nvme_qpair.c: 
01:08:19.560 [2024-12-09 06:06:25.914 - 06:06:25.921] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE* pairs repeated for every queued I/O on sqid:1: WRITE lba 87232-87904 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ lba 87064-87184 (len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
01:08:19.562 8971.67 IOPS, 35.05 MiB/s [2024-12-09T06:07:14.148Z]
8983.90 IOPS, 35.09 MiB/s [2024-12-09T06:07:14.148Z]
9006.64 IOPS, 35.18 MiB/s [2024-12-09T06:07:14.148Z]
9007.50 IOPS, 35.19 MiB/s [2024-12-09T06:07:14.148Z]
9007.46 IOPS, 35.19 MiB/s [2024-12-09T06:07:14.148Z]
9008.14 IOPS, 35.19 MiB/s [2024-12-09T06:07:14.148Z]
8993.60 IOPS, 35.13 MiB/s [2024-12-09T06:07:14.148Z]
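The per-interval throughput samples above are consistent with the I/O size shown in the surrounding qpair traces: every command is len:8, which works out to 4 KiB per I/O if the namespace uses 512-byte logical blocks (an assumption; the block size is not printed in this excerpt). A minimal sketch of that cross-check, not part of the original log:

# Cross-check (illustrative only, not from the original log): MiB/s should equal
# IOPS * I/O size. Assumes len:8 means 8 logical blocks of 512 bytes, i.e. 4 KiB per I/O.
IO_SIZE_BYTES = 8 * 512  # assumption: 512-byte logical blocks

# IOPS samples copied from the progress lines above.
samples = [8971.67, 8983.90, 9006.64, 9007.50, 9007.46, 9008.14, 8993.60]

for iops in samples:
    mib_per_s = iops * IO_SIZE_BYTES / (1024 * 1024)
    print(f"{iops:8.2f} IOPS -> {mib_per_s:.2f} MiB/s")
# Prints 35.05, 35.09, 35.18, 35.19, 35.19, 35.19 and 35.13 MiB/s, matching the logged values.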
01:08:19.563 [2024-12-09 06:06:32.639 - 06:06:32.645] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE* pairs repeated for every queued I/O on sqid:1: WRITE lba 45856-46544 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ lba 45600-45728 (len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
*NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:46552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.565 [2024-12-09 06:06:32.645264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:08:19.565 [2024-12-09 06:06:32.645290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:46560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.565 [2024-12-09 06:06:32.645306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:08:19.565 [2024-12-09 06:06:32.645332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:46568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.565 [2024-12-09 06:06:32.645348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:08:19.565 [2024-12-09 06:06:32.645374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:46576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.565 [2024-12-09 06:06:32.645390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:08:19.565 [2024-12-09 06:06:32.645416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:46584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.565 [2024-12-09 06:06:32.645433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:32.645458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:46592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.566 [2024-12-09 06:06:32.645475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:32.645501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:46600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.566 [2024-12-09 06:06:32.645517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:32.645542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:46608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.566 [2024-12-09 06:06:32.645560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:32.645585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:45736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.566 [2024-12-09 06:06:32.645602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:32.645627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:45744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.566 [2024-12-09 06:06:32.645652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:32.645694] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:45752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.566 [2024-12-09 06:06:32.645712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:32.645738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:45760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.566 [2024-12-09 06:06:32.645755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:32.645781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:45768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.566 [2024-12-09 06:06:32.645797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:32.645823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:45776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.566 [2024-12-09 06:06:32.645840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:32.645865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:45784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.566 [2024-12-09 06:06:32.645881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:32.645907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:45792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.566 [2024-12-09 06:06:32.645924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:32.645949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:45800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.566 [2024-12-09 06:06:32.645965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:32.645991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:45808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.566 [2024-12-09 06:06:32.646007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:32.646032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.566 [2024-12-09 06:06:32.646049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:32.646074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:45824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.566 [2024-12-09 06:06:32.646091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 
sqhd:0064 p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:32.646116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:45832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.566 [2024-12-09 06:06:32.646132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:32.646158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:45840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.566 [2024-12-09 06:06:32.646178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:32.646213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:45848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.566 [2024-12-09 06:06:32.646231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:08:19.566 8548.25 IOPS, 33.39 MiB/s [2024-12-09T06:07:14.152Z] 8448.29 IOPS, 33.00 MiB/s [2024-12-09T06:07:14.152Z] 8472.06 IOPS, 33.09 MiB/s [2024-12-09T06:07:14.152Z] 8478.16 IOPS, 33.12 MiB/s [2024-12-09T06:07:14.152Z] 8488.65 IOPS, 33.16 MiB/s [2024-12-09T06:07:14.152Z] 8506.81 IOPS, 33.23 MiB/s [2024-12-09T06:07:14.152Z] 8533.32 IOPS, 33.33 MiB/s [2024-12-09T06:07:14.152Z] [2024-12-09 06:06:39.833414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:89576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.566 [2024-12-09 06:06:39.833482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:39.833535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:89584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.566 [2024-12-09 06:06:39.833554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:39.833577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:89592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.566 [2024-12-09 06:06:39.833593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:39.834486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.566 [2024-12-09 06:06:39.834516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:39.834544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:89608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.566 [2024-12-09 06:06:39.834562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:39.834584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:89616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.566 [2024-12-09 06:06:39.834600] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:39.834621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:89624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.566 [2024-12-09 06:06:39.834637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:39.834673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:89632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.566 [2024-12-09 06:06:39.834733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:39.834758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:89640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.566 [2024-12-09 06:06:39.834775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:39.834797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:89648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.566 [2024-12-09 06:06:39.834815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:39.834836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:89656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.566 [2024-12-09 06:06:39.834878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:39.834902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:89664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.566 [2024-12-09 06:06:39.834920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:39.834943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:89672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.566 [2024-12-09 06:06:39.834960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:39.834982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:89680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.566 [2024-12-09 06:06:39.834999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:39.835035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:89688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.566 [2024-12-09 06:06:39.835067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:39.835088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:89696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.566 [2024-12-09 
06:06:39.835104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:39.835124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:89704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.566 [2024-12-09 06:06:39.835140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:08:19.566 [2024-12-09 06:06:39.835160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:89712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.835176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.835196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:89720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.835212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.835233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.835249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.835270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.835286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.835306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.835322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.835342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.835366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.835389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:89760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.835408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.835429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.835446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.835467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:89776 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.835484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.835505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.835521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.835542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:89792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.835558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.835579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:89800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.835596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.835617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.835634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.835654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:89816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.835687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.835720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:89824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.835740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.835762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.835779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.835800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:89840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.835816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.835838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.835855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.835896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:122 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.835915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.835937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.835955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.836369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:89872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.836397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.836424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:89880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.836442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.836464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.836481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.836502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.836519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.836539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.836555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.836577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.836594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.836615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.836631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.836683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:89928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.836704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.836726] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.836743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.836764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.836781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.836814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:89952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.836834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.836866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.836883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.836904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:89968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.836921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.836943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:89976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.836977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.836999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:89984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.837016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.837052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:89992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.837069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.837090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:90000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.567 [2024-12-09 06:06:39.837107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:08:19.567 [2024-12-09 06:06:39.837128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:90008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.837145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 
01:08:19.568 [2024-12-09 06:06:39.837167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:90016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.837184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.837206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:90024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.837223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.837245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:90032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.837262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.837298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:90040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.837315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.837336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:90048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.837359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.837382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:90056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.837400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.837420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.837437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.837457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:90072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.837474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.837494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:90080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.837511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.837531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.837548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.837568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:90096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.837585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.837605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:90104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.837621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.837642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:90112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.837674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.837696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:90120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.837713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.837749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:90128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.837767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.837789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.837806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.837828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.837853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.837876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:90152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.837894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.837916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:90160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.837933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.837954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:90168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.837971] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.837993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.838010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.838053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.838070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.838091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:90192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.838107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.838128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:90200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.838144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.838165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.838182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.838202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:90216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.838219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.838240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:90224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.838256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.838277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.838293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.838314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:90240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.838331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.838360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:90248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 
[2024-12-09 06:06:39.838377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.838398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:90256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.838415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.838436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.568 [2024-12-09 06:06:39.838452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:08:19.568 [2024-12-09 06:06:39.838473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:90272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.569 [2024-12-09 06:06:39.838490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:08:19.569 [2024-12-09 06:06:39.838510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:90280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.569 [2024-12-09 06:06:39.838526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:08:19.569 [2024-12-09 06:06:39.838547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:90288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.569 [2024-12-09 06:06:39.838564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:08:19.569 [2024-12-09 06:06:39.838584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:90296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.569 [2024-12-09 06:06:39.838600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:08:19.569 [2024-12-09 06:06:39.838637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:90304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.569 [2024-12-09 06:06:39.838670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:08:19.569 [2024-12-09 06:06:39.838717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:90312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.569 [2024-12-09 06:06:39.838737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:08:19.569 [2024-12-09 06:06:39.838759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:90320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.569 [2024-12-09 06:06:39.838777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:08:19.569 [2024-12-09 06:06:39.838799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:90328 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.569 [2024-12-09 06:06:39.838817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:08:19.569 [2024-12-09 06:06:39.838839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:90336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.569 [2024-12-09 06:06:39.838856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:08:19.569 [2024-12-09 06:06:39.838878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:90344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.569 [2024-12-09 06:06:39.838905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:08:19.569 [2024-12-09 06:06:39.838929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:90352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.569 [2024-12-09 06:06:39.838947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:08:19.569 [2024-12-09 06:06:39.838970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.569 [2024-12-09 06:06:39.838987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:08:19.569 [2024-12-09 06:06:39.839024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:90368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.569 [2024-12-09 06:06:39.839072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:08:19.569 [2024-12-09 06:06:39.839098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:90376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.569 [2024-12-09 06:06:39.839116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:08:19.569 [2024-12-09 06:06:39.839880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:90384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.569 [2024-12-09 06:06:39.839910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:08:19.569 [2024-12-09 06:06:39.839938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:90392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.569 [2024-12-09 06:06:39.839956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:08:19.569 [2024-12-09 06:06:39.839978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:90400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.569 [2024-12-09 06:06:39.839995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:08:19.569 [2024-12-09 06:06:39.840015] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:90408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.569 [2024-12-09 06:06:39.840047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:08:19.569 [2024-12-09 06:06:39.840067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:90416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.569 [2024-12-09 06:06:39.840083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:08:19.569 [2024-12-09 06:06:39.840104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:90424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.569 [2024-12-09 06:06:39.840120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.569 [2024-12-09 06:06:39.840140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:90432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.569 [2024-12-09 06:06:39.840156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:08:19.569 [2024-12-09 06:06:39.840177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:90440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.569 [2024-12-09 06:06:39.840210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:08:19.569 [2024-12-09 06:06:39.840233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.569 [2024-12-09 06:06:39.840250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:08:19.569 [2024-12-09 06:06:39.840270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.569 [2024-12-09 06:06:39.840286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:08:19.569 [2024-12-09 06:06:39.840307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:90464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.569 [2024-12-09 06:06:39.840323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:08:19.569 [2024-12-09 06:06:39.840342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:90472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.569 [2024-12-09 06:06:39.840358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:08:19.569 [2024-12-09 06:06:39.840379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.569 [2024-12-09 06:06:39.840395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:08:19.569 [2024-12-09 06:06:39.840415] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
01:08:19.569 [2024-12-09 06:06:39.840430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided: WRITE (and occasional READ) commands on sqid:1 nsid:1, lba 89576-90592, len:8, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, sqhd incrementing from 0008 and wrapping at 007f, timestamps 2024-12-09 06:06:39.840430 through 06:06:39.861432, log clock 01:08:19.569-01:08:19.574 ...]
01:08:19.574 [2024-12-09 06:06:39.861452]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:90016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.574 [2024-12-09 06:06:39.861468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:08:19.574 [2024-12-09 06:06:39.861487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:90024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.574 [2024-12-09 06:06:39.861503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:08:19.574 [2024-12-09 06:06:39.861524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:90032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.574 [2024-12-09 06:06:39.861540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:08:19.574 [2024-12-09 06:06:39.861560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:90040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.574 [2024-12-09 06:06:39.861576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:08:19.574 [2024-12-09 06:06:39.861597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:90048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.574 [2024-12-09 06:06:39.861613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:08:19.574 [2024-12-09 06:06:39.861632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:90056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.574 [2024-12-09 06:06:39.861648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:08:19.574 [2024-12-09 06:06:39.861709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.574 [2024-12-09 06:06:39.861741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:08:19.574 [2024-12-09 06:06:39.861765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.574 [2024-12-09 06:06:39.861783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:08:19.574 [2024-12-09 06:06:39.861805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:90080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.574 [2024-12-09 06:06:39.861823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:08:19.574 [2024-12-09 06:06:39.861845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:90088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.574 [2024-12-09 06:06:39.861863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0056 
p:0 m:0 dnr:0 01:08:19.574 [2024-12-09 06:06:39.861885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:90096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.574 [2024-12-09 06:06:39.861902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:08:19.574 [2024-12-09 06:06:39.861924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:90104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.574 [2024-12-09 06:06:39.861941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:08:19.574 [2024-12-09 06:06:39.861963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:90112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.574 [2024-12-09 06:06:39.861981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:08:19.574 [2024-12-09 06:06:39.862003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.574 [2024-12-09 06:06:39.862020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:08:19.574 [2024-12-09 06:06:39.862071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.574 [2024-12-09 06:06:39.862088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:08:19.574 [2024-12-09 06:06:39.862109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:90136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.574 [2024-12-09 06:06:39.862125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:08:19.574 [2024-12-09 06:06:39.862145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:90144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.574 [2024-12-09 06:06:39.862162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:08:19.574 [2024-12-09 06:06:39.862183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:90152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.574 [2024-12-09 06:06:39.862199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:08:19.574 [2024-12-09 06:06:39.862228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.574 [2024-12-09 06:06:39.862260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:08:19.574 [2024-12-09 06:06:39.862296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.574 [2024-12-09 06:06:39.862313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:08:19.574 [2024-12-09 06:06:39.862334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:90176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.574 [2024-12-09 06:06:39.862350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.862371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:90184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.862387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.862407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.862424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.862444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:90200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.862460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.862481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:90208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.862497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.862518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.862534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.862555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:90224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.862571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.862591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:90232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.862608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.862629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:90240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.862645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.862721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.862748] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.862772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:90256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.862798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.862822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:90264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.862840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.862863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:90272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.862880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.862902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:90280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.862920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.862942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:90288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.862959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.862981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:90296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.862999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.863021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:90304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.863038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.863075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:90312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.863091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.863113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:90320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.863129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.863151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:90328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:08:19.575 [2024-12-09 06:06:39.863167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.863189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:90336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.863206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.863242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.863273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.863294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:90352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.863317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.863339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:90360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.863356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.864150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:90368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.864180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.864207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:90376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.864226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.864247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:90384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.864278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.864299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:90392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.864315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.864335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:90400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.864351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.864372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 
lba:90408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.864387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.864408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:90416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.864424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.864444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:90424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.864460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.864480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:90432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.864496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.864516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:90440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.864532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.864552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.864568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.864600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.864618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.864638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:90464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.864654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.864709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:90472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.864744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.864770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.864789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.864811] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.864828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.864850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:90496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.864868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.864890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:90504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.864908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.864930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:90512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.864953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.864975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:90520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.864993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.865015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:90528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.575 [2024-12-09 06:06:39.865047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:08:19.575 [2024-12-09 06:06:39.865069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.865085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.865107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:90544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.865123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.865154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.865172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.865194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:90560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.865211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
01:08:19.576 [2024-12-09 06:06:39.865249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:90568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.865281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.865303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:90576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.865319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.865355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:90584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.865372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.865392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:90592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.865409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.865430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:89576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.576 [2024-12-09 06:06:39.865447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.865468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:89584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.865485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.865505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:89592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.865522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.865543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:89600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.865559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.865580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:89608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.865596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.865632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:89616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.865647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:33 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.865700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:89624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.865725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.865763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:89632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.865784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.865808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:89640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.865825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.865847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:89648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.865865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.865887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:89656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.865904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.865926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:89664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.865943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.865966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:89672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.865983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.866006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:89680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.866038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.866059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:89688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.866090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.866111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:89696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.866127] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.866148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.866164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.866185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:89712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.866201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.866222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:89720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.866260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.866282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.866298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.866319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:89736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.866335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.866355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.866371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.866391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.866407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.866427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:89760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.866443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.866464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.866480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.866500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:08:19.576 [2024-12-09 06:06:39.866516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.866536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.866552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.866572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:89792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.866588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.866608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:89800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.866624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.866645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.866699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:08:19.576 [2024-12-09 06:06:39.866729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:89816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.576 [2024-12-09 06:06:39.866747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.866780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:89824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.866799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.866823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.866840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.866864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.866881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.867655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.867701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.867730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 
lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.867767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.867791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.867809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.867833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.867850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.867873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:89880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.867891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.867914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.867931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.867953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.867970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.867993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.868010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.868032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.868049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.868084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.868103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.868126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:89928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.868143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.868165] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:89936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.868183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.868205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.868223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.868245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:89952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.868263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.868286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.868303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.868326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.868344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.868366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:89976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.868383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.868406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:89984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.868424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.868446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.868463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.868485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:90000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.868503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.868525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:90008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.868543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0 
01:08:19.577 [2024-12-09 06:06:39.868565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.868590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.868614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:90024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.868633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.868670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:90032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.868690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.868712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.868729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.868752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.868770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.868792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:90056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.868809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.868831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.868849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.868871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:90072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.868889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.868911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.868928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.868950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.868967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.868990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:90096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.869007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.869029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:90104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.869047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.869069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:90112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.869094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.869119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:90120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.869137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.869159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:90128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.869177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.869199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.869217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.869239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:90144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.869256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.869279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:90152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.869296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.869318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:90160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.869336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:08:19.577 [2024-12-09 06:06:39.869358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:90168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.577 [2024-12-09 06:06:39.869375] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.869397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:90176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.869415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.869437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:90184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.869454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.869477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:90192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.869494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.869516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:90200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.869534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.869556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:90208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.869573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.869604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:90216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.869623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.869657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:90224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.869678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.869702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:90232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.869720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.869743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.869760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.869782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:90248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:08:19.578 [2024-12-09 06:06:39.869800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.869822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:90256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.869840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.869862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:90264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.869880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.869903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:90272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.869921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.869943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:90280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.869960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.869982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:90288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.870000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.870024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:90296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.870042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.870064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:90304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.870082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.870116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.870135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.870158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:90320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.870176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.870198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 
lba:90328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.870216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.870239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:90336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.870257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.870279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:90344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.870297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.870319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:90352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.870337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.871127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:90360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.871157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.871184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.871203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.871225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:90376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.871243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.871265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:90384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.871296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.871316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:90392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.871333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.871354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:90400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.871370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.871391] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:90408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.871418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.871441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:90416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.871458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.871479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:90424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.871496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.871517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:90432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.871533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.871554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:90440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.871570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.871591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.871608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.871628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.871644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.871682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:90464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.871715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.871740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:90472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.871758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.871780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.871797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 
01:08:19.578 [2024-12-09 06:06:39.871819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.871837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.871859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:90496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.871876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.871898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:90504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.871924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.871948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:90512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.871966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.871988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:90520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.872005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.872041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:90528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.578 [2024-12-09 06:06:39.872073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:08:19.578 [2024-12-09 06:06:39.872094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:90536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.872110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.872130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:90544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.872147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.872167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.872183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.872204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:90560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.872220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:93 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.872241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:90568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.872257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.872278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:90576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.872295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.872316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:90584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.872332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.872353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:90592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.872369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.872390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:89576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.579 [2024-12-09 06:06:39.872406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.872435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:89584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.872453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.872474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:89592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.872490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.872511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:89600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.872527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.872548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:89608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.872564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.872585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.872601] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.872622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:89624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.872639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.872675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:89632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.872705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.872731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:89640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.872749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.872776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:89648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.872794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.872816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.872834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.872856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:89664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.872874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.872896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:89672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.872913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.872946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:89680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.872965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.872988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:89688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.873005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.873027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:89696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:08:19.579 [2024-12-09 06:06:39.873059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.873095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:89704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.873112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.873132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:89712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.873149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.873170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:89720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.873186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.873207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.873223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.873244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:89736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.873261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.873281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.873298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.873318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.873335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.873355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:89760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.873371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.873392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.873409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.873431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 
lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.873455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.873478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.873495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.873516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:89792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.873532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.873553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.873570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.873591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.873607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.873628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:89816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.873644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.873681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:89824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.873714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.873739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.873757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.874478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:89840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.874506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.874533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.874551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.874573] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.874590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.874611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.874627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.874648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:89872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.874718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:08:19.579 [2024-12-09 06:06:39.874747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:89880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.579 [2024-12-09 06:06:39.874766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.874789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.874807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.874829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.874847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.874869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.874889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.874913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.874931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.874953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.874971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.874993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:89928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.875010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
01:08:19.580 [2024-12-09 06:06:39.875033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:89936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.875050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.875072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.875090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.875112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:89952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.875130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.875152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.875170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.875206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:89968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.875223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.875268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:89976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.875285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.875306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.875323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.875343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:89992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.875360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.875381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:90000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.875397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.875418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:90008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.875434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:120 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.875455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:90016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.875472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.875492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:90024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.875509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.875530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:90032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.875547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.875568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:90040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.875585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.875606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:90048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.875622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.875642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.875674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.875710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.875731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.875762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:90072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.875780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.875803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:90080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.875821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.875843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:90088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.875860] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.875883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:90096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.875900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.875923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.875940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.875963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.875980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.876002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:90120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.876020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.876042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:90128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.876059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.876082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:90136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.876099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.876121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.876138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.876160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.876177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.876199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:90160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.876231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.876253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:90168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 
[2024-12-09 06:06:39.876277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.876300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.876332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.876353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:90184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.876370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.876390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:90192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.876407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.876427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.876444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.876465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:90208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.876481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.876501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:90216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.876517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.876538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:90224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.580 [2024-12-09 06:06:39.876557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:08:19.580 [2024-12-09 06:06:39.876579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.876595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.876616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:90240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.876632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.876669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:90248 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.876686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.876723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:90256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.876741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.876764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:90264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.876788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.876813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:90272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.876831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.876853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:90280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.876870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.876892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:90288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.876910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.876932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:90296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.876949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.876971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:90304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.876989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.877011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:90312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.877043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.877064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:90320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.877081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.877103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:91 nsid:1 lba:90328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.877119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.877141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:90336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.877158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.877180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:90344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.877197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.877985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:90352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.878030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.878072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:90360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.878090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.878125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:90368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.878143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.878164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:90376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.878181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.878201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:90384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.878218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.878239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:90392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.878255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.878276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:90400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.878293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.878313] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:90408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.878330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.878351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:90416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.878368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.878388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:90424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.878405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.878425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:90432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.878442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.878462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:90440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.878479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.878499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.878515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.878536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.878552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.878581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:90464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.878599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.878620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:90472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.878637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.878674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.878717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0007 p:0 m:0 
dnr:0 01:08:19.581 [2024-12-09 06:06:39.878743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.878761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.878784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:90496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.878801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.878824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:90504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.878848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.878870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:90512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.878888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.878910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.878928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.878950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:90528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.878967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.878990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:90536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.879007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.879059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:90544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.879076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.879097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.879114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.879135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:90560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.879160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.879182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:90568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.879199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.879220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:90576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.879237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.879258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:90584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.879274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.879294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:90592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.879311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.879332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:89576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.581 [2024-12-09 06:06:39.879349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.879370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:89584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.879387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:08:19.581 [2024-12-09 06:06:39.879407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:89592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.581 [2024-12-09 06:06:39.879423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.879445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:89600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.879461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.879482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:89608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.879498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.879519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:89616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.879536] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.879556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:89624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.879573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.879593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:89632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.879616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.879639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:89640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.879672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.879706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:89648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.879726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.879749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:89656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.879767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.879790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:89664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.879807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.879829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:89672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.879846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.879869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:89680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.879886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.879908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.879925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.879947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:89696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:08:19.582 [2024-12-09 06:06:39.879965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.879987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:89704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.880005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.880027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:89712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.880044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.880066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:89720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.880098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.880120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.880136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.880169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:89736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.880187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.880209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.880226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.880248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.880265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.880286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:89760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.880318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.880339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.880356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.880377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 
lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.880394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.880414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.880431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.880451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:89792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.880468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.880488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:89800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.880505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.880526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.880542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.880562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:89816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.880579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.880600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.880617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.881369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.881397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.881424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:89840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.881442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.881464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.881480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.881502] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.881519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.881540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.881556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.881577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:89872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.881594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.881615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:89880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.881631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.881651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.881684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.881725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.881743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.881766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.881783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.881805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.881823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.881846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.881863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.881885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:89928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.881914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
01:08:19.582 [2024-12-09 06:06:39.881938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:89936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.881957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.881979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.881996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.882032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.882064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.882085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.882101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.882122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:89968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.882138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.882159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.882175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:08:19.582 [2024-12-09 06:06:39.882196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:89984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.582 [2024-12-09 06:06:39.882213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.882233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:89992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.882249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.882270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.882287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.882309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:90008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.882325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.882346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:90016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.882363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.882383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.882407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.882430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.882447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.882468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:90040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.882485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.882506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:90048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.882522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.882543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:90056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.882559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.882580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.882596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.882617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.882634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.882655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:90080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.882711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.882737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:90088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.882755] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.882778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:90096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.882795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.882817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:90104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.882835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.882857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:90112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.882874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.882896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.882914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.882946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:90128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.882964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.882987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:90136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.883005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.883042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:90144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.883074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.883095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:90152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.883111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.883133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:90160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.883149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.883170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:90168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 
06:06:39.883187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.883207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:90176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.883224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.883245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:90184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.883261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.883282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:90192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.883298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.883319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:90200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.883336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.883358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:90208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.883374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.883395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:90216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.883412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.883439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.883457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.883478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:90232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.883495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.883516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:90240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.883532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.883553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:90248 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.883569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.883590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:90256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.883607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.883628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:90264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.883644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.883682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:90272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.883711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.883736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:90280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.883753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.883776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:90288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.883794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.883816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.883833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.883856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:90304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.883873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.883895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:90312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.883913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.883935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:90320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.883961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.883984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:67 nsid:1 lba:90328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.884002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.884025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:90336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.884043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.884831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:90344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.884861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.884889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.884909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.884933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:90360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.884951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.884973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:90368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.884990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.885013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:90376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.885045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007a p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.885081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:90384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.583 [2024-12-09 06:06:39.885098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 01:08:19.583 [2024-12-09 06:06:39.885119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:90392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.885135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007c p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.885156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:90400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.885173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007d p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.885194] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:90408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.885210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.885231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:90416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.885257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007f p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.885281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:90424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.885298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.885319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:90432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.885336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.885357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:90440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.885373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.885394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.885411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.885431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.885448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.885469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:90464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.885485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.885506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:90472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.885522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.885543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.885560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0007 p:0 
m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.885581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.885597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.885618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:90496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.885634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.885672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:90504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.885701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000a p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.885743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:90512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.885761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000b p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.885796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:90520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.885815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000c p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.885838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:90528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.885855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000d p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.885877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.885894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.885916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:90544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.885934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000f p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.885956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.885973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.885995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:90560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.886013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.886049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:90568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.886080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.886101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:90576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.886117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.886138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:90584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.886154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.886175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:90592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.886191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.886212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:89576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.584 [2024-12-09 06:06:39.886228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.886249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:89584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.886266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.886294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:89592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.886312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.886333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.886349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.886370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:89608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.886387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001a p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.886407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:89616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.886424] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001b p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.886446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:89624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.886462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.886483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:89632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.886499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001d p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.886520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.886536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001e p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.886557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:89648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.886574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001f p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.886595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:89656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.886611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.886632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:89664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.886649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.886725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:89672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.886747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.886771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:89680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.886788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.886811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:89688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.886837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.886861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:89696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
01:08:19.584 [2024-12-09 06:06:39.886879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.886901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:89704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.886919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.886941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:89712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.886958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.886980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:89720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.886997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.887020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.584 [2024-12-09 06:06:39.887037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 01:08:19.584 [2024-12-09 06:06:39.887088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:89736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.887104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002a p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.887141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.887157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002b p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.887179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.887196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.887218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.887235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.887256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.887273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.887295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 
lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.887312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.887334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.887357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.887381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:89792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.887399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.887421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:89800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.887438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.887459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.887476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.887498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:89816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.887515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.887801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:89824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.887831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.887881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.887905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.887933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:89840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.887951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.887978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.887995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.888022] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.888040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.888080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.888097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003a p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.888122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:89872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.888139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003b p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.888165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:89880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.888182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003c p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.888220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.888238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003d p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.888265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.888282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003e p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.888308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.888325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003f p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.888350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.888381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.888406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.888422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.888447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:89928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.888463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
01:08:19.585 [2024-12-09 06:06:39.888488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:89936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.888504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.888529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.888545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.888570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:89952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.888586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.888611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.888627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.888652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.888684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.888728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:89976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.888746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.888780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:89984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.888799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.888825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:89992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.888842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.888867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:90000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.888884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.888909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:90008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.888926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:117 cdw0:0 sqhd:004c p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.888952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:90016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.888969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004d p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.888995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:90024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.889013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004e p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.889052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:90032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.889069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004f p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.889094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.889110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.889135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:90048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.889151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.889176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:90056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.889192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.889217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.889234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.889258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:90072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.889275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.889300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:90080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.889324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.889350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.889367] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.889392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.889408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.889433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:90104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.889449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.889474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:90112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.889490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.889515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:90120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.889532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.889556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.889573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005b p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.889598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.889614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 01:08:19.585 [2024-12-09 06:06:39.889638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:90144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.585 [2024-12-09 06:06:39.889655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:39.889710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:90152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.586 [2024-12-09 06:06:39.889728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005e p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:39.889753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.586 [2024-12-09 06:06:39.889770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:39.889795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:90168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.586 
[2024-12-09 06:06:39.889812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:39.889837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:90176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.586 [2024-12-09 06:06:39.889864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:39.889892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.586 [2024-12-09 06:06:39.889910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:39.889935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:90192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.586 [2024-12-09 06:06:39.889952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:39.889977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:90200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.586 [2024-12-09 06:06:39.889994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:39.890034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:90208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.586 [2024-12-09 06:06:39.890051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:39.890077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.586 [2024-12-09 06:06:39.890093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:39.890118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:90224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.586 [2024-12-09 06:06:39.890134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:39.890159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:90232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.586 [2024-12-09 06:06:39.890176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:39.890200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:90240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.586 [2024-12-09 06:06:39.890217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:39.890241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:90248 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.586 [2024-12-09 06:06:39.890258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006a p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:39.890282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:90256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.586 [2024-12-09 06:06:39.890299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006b p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:39.890323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:90264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.586 [2024-12-09 06:06:39.890339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006c p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:39.890364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:90272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.586 [2024-12-09 06:06:39.890381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006d p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:39.890413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:90280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.586 [2024-12-09 06:06:39.890430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:39.890455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:90288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.586 [2024-12-09 06:06:39.890472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006f p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:39.890496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:90296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.586 [2024-12-09 06:06:39.890513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:39.890537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:90304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.586 [2024-12-09 06:06:39.890554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:39.890578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.586 [2024-12-09 06:06:39.890595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:39.890619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:90320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.586 [2024-12-09 06:06:39.890636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:39.890661] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:90328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.586 [2024-12-09 06:06:39.890716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:39.890909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:90336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.586 [2024-12-09 06:06:39.890935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 01:08:19.586 8322.65 IOPS, 32.51 MiB/s [2024-12-09T06:07:14.172Z] 7975.88 IOPS, 31.16 MiB/s [2024-12-09T06:07:14.172Z] 7656.84 IOPS, 29.91 MiB/s [2024-12-09T06:07:14.172Z] 7362.35 IOPS, 28.76 MiB/s [2024-12-09T06:07:14.172Z] 7089.67 IOPS, 27.69 MiB/s [2024-12-09T06:07:14.172Z] 6836.46 IOPS, 26.70 MiB/s [2024-12-09T06:07:14.172Z] 6600.72 IOPS, 25.78 MiB/s [2024-12-09T06:07:14.172Z] 6529.00 IOPS, 25.50 MiB/s [2024-12-09T06:07:14.172Z] 6599.26 IOPS, 25.78 MiB/s [2024-12-09T06:07:14.172Z] 6649.69 IOPS, 25.98 MiB/s [2024-12-09T06:07:14.172Z] 6692.06 IOPS, 26.14 MiB/s [2024-12-09T06:07:14.172Z] 6736.26 IOPS, 26.31 MiB/s [2024-12-09T06:07:14.172Z] 6773.14 IOPS, 26.46 MiB/s [2024-12-09T06:07:14.172Z] [2024-12-09 06:06:53.396997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:08:19.586 [2024-12-09 06:06:53.397059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:53.397077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:08:19.586 [2024-12-09 06:06:53.397091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:53.397104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:08:19.586 [2024-12-09 06:06:53.397117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:53.397153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:08:19.586 [2024-12-09 06:06:53.397167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:53.397180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110dd90 is same with the state(6) to be set 01:08:19.586 [2024-12-09 06:06:53.397260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:124656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.586 [2024-12-09 06:06:53.397284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:53.397308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:124664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.586 [2024-12-09 06:06:53.397325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:53.397341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.586 [2024-12-09 06:06:53.397355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:53.397370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:124680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.586 [2024-12-09 06:06:53.397384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:53.397400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:124688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.586 [2024-12-09 06:06:53.397414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:53.397430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:124696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.586 [2024-12-09 06:06:53.397444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:53.397459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:124704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.586 [2024-12-09 06:06:53.397474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:53.397490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:124712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.586 [2024-12-09 06:06:53.397504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:53.397520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:124720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.586 [2024-12-09 06:06:53.397534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:53.397550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:124728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.586 [2024-12-09 06:06:53.397564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:53.397579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:124736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.586 [2024-12-09 06:06:53.397608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:53.397635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:124744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.586 [2024-12-09 06:06:53.397650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
01:08:19.586 [2024-12-09 06:06:53.397682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:124752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.586 [2024-12-09 06:06:53.397696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:53.397729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:124760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.586 [2024-12-09 06:06:53.397745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:53.397761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:124768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.586 [2024-12-09 06:06:53.397776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:53.397791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:124776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.586 [2024-12-09 06:06:53.397806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:53.397821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:124784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.586 [2024-12-09 06:06:53.397835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:53.397851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:124792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.586 [2024-12-09 06:06:53.397866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:53.397882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:124800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.586 [2024-12-09 06:06:53.397896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:53.397923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:124808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.586 [2024-12-09 06:06:53.397937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:53.397953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:124816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.586 [2024-12-09 06:06:53.397967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.586 [2024-12-09 06:06:53.397983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:125328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.586 [2024-12-09 06:06:53.397997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 
06:06:53.398013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:125336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.587 [2024-12-09 06:06:53.398028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.398043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:125344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.587 [2024-12-09 06:06:53.398065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.398082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:125352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.587 [2024-12-09 06:06:53.398097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.398113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:125360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.587 [2024-12-09 06:06:53.398127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.398142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:125368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.587 [2024-12-09 06:06:53.398156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.398172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:125376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.587 [2024-12-09 06:06:53.398186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.398202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:125384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.587 [2024-12-09 06:06:53.398216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.398231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.587 [2024-12-09 06:06:53.398245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.398261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:124824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.398275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.398290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:124832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.398304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.398319] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:124840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.398334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.398349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:124848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.398364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.398379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:124856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.398393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.398410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:124864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.398424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.398440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:124872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.398460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.398476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:124880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.398490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.398506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:124888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.398520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.398535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:124896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.398549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.398565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:124904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.398580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.398595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:124912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.398609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.398625] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:52 nsid:1 lba:124920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.398639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.398668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:124928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.398696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.398716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:124936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.398731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.398746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:125400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.587 [2024-12-09 06:06:53.398760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.398776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:124944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.398790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.398806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:124952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.398820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.398835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:124960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.398849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.398872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:124968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.398887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.398902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:124976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.398916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.398932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:124984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.398947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.398962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 
nsid:1 lba:124992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.398976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.398992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:125000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.399007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.399022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:125008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.399036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.399052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:125016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.399066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.399081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:125024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.399096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.399112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:125032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.399126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.399142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:125040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.399156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.399171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:125048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.399185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.399201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:125056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.399215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.399230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:125064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.399250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.399267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:125072 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.399281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.399296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:125080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.399310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.399327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:125088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.399341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.399356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:125096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.399370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.399385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:125104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.399400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.399415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:125112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.399429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.399445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:125120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.399459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.399475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.399489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.587 [2024-12-09 06:06:53.399505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:125136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.587 [2024-12-09 06:06:53.399519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.399534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:125144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.588 [2024-12-09 06:06:53.399548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.399564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:125152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 01:08:19.588 [2024-12-09 06:06:53.399579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.399594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:125160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.588 [2024-12-09 06:06:53.399609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.399629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:125168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.588 [2024-12-09 06:06:53.399658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.399678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:125176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.588 [2024-12-09 06:06:53.399692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.399708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:125184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.588 [2024-12-09 06:06:53.399722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.399737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:125192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.588 [2024-12-09 06:06:53.399751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.399767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:125200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.588 [2024-12-09 06:06:53.399781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.399797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:125208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.588 [2024-12-09 06:06:53.399811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.399827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:125216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.588 [2024-12-09 06:06:53.399841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.399857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:125224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.588 [2024-12-09 06:06:53.399871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.399887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:125232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.588 [2024-12-09 
06:06:53.399902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.399918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.588 [2024-12-09 06:06:53.399932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.399948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:125248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.588 [2024-12-09 06:06:53.399962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.399977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:125256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.588 [2024-12-09 06:06:53.399992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:125264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.588 [2024-12-09 06:06:53.400028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:125272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.588 [2024-12-09 06:06:53.400058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:125280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.588 [2024-12-09 06:06:53.400089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:125288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.588 [2024-12-09 06:06:53.400119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:125296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.588 [2024-12-09 06:06:53.400149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:125304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.588 [2024-12-09 06:06:53.400178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:125312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.588 [2024-12-09 06:06:53.400208] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:125320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:19.588 [2024-12-09 06:06:53.400237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:125408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.588 [2024-12-09 06:06:53.400267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:125416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.588 [2024-12-09 06:06:53.400302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:125424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.588 [2024-12-09 06:06:53.400332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:125432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.588 [2024-12-09 06:06:53.400362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:125440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.588 [2024-12-09 06:06:53.400392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:125448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.588 [2024-12-09 06:06:53.400429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:125456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.588 [2024-12-09 06:06:53.400458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:125464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.588 [2024-12-09 06:06:53.400488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:125472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.588 [2024-12-09 06:06:53.400517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:125480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.588 [2024-12-09 06:06:53.400547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:125488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.588 [2024-12-09 06:06:53.400581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:125496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.588 [2024-12-09 06:06:53.400612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:125504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.588 [2024-12-09 06:06:53.400641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:125512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.588 [2024-12-09 06:06:53.400685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:125520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.588 [2024-12-09 06:06:53.400714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:125528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.588 [2024-12-09 06:06:53.400744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:125536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.588 [2024-12-09 06:06:53.400774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.588 [2024-12-09 06:06:53.400810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:125552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.588 [2024-12-09 06:06:53.400846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:125560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.588 [2024-12-09 06:06:53.400876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:125568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.588 [2024-12-09 06:06:53.400906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:125576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.588 [2024-12-09 06:06:53.400935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:125584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.588 [2024-12-09 06:06:53.400965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.400980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:125592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.588 [2024-12-09 06:06:53.400994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.401009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:125600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.588 [2024-12-09 06:06:53.401023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.401039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:125608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.588 [2024-12-09 06:06:53.401058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.401074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:125616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.588 [2024-12-09 06:06:53.401090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.401105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:125624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.588 [2024-12-09 06:06:53.401119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.401135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:125632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.588 [2024-12-09 06:06:53.401149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
01:08:19.588 [2024-12-09 06:06:53.401165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:125640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.588 [2024-12-09 06:06:53.401179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.401194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:125648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.588 [2024-12-09 06:06:53.401214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.588 [2024-12-09 06:06:53.401231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:125656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.589 [2024-12-09 06:06:53.401245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.589 [2024-12-09 06:06:53.401261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:125664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:19.589 [2024-12-09 06:06:53.401274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.589 [2024-12-09 06:06:53.401310] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:08:19.589 [2024-12-09 06:06:53.401325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:08:19.589 [2024-12-09 06:06:53.401336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125672 len:8 PRP1 0x0 PRP2 0x0 01:08:19.589 [2024-12-09 06:06:53.401350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:19.589 [2024-12-09 06:06:53.402701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:08:19.589 [2024-12-09 06:06:53.402743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110dd90 (9): Bad file descriptor 01:08:19.589 [2024-12-09 06:06:53.402853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:08:19.589 [2024-12-09 06:06:53.402885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x110dd90 with addr=10.0.0.3, port=4421 01:08:19.589 [2024-12-09 06:06:53.402903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110dd90 is same with the state(6) to be set 01:08:19.589 [2024-12-09 06:06:53.402929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x110dd90 (9): Bad file descriptor 01:08:19.589 [2024-12-09 06:06:53.402954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:08:19.589 [2024-12-09 06:06:53.402970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:08:19.589 [2024-12-09 06:06:53.402986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:08:19.589 [2024-12-09 06:06:53.403000] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
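The entries above show the bdev_nvme reset path for the multipath host: the TCP connection to 10.0.0.3 port 4421 is refused (connect() errno 111), the controller drops into the error state, the reconnect poll fails, and the reset is retried until a later attempt succeeds. A minimal, illustrative sketch of the host-side knobs that bound this retry loop follows; the flags are the same ones the timeout test uses later in this log when it attaches its controller, and the address/port values are taken from that later command, not from the multipath test itself:

    # Hedged sketch: attach a controller with a bounded reconnect policy.
    #   --reconnect-delay-sec      seconds to wait between reconnect attempts
    #   --ctrlr-loss-timeout-sec   seconds after which the controller is failed for good
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2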
01:08:19.589 [2024-12-09 06:06:53.403015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:08:19.589 6810.56 IOPS, 26.60 MiB/s [2024-12-09T06:07:14.175Z] 6850.73 IOPS, 26.76 MiB/s [2024-12-09T06:07:14.175Z] 6880.63 IOPS, 26.88 MiB/s [2024-12-09T06:07:14.175Z] 6913.95 IOPS, 27.01 MiB/s [2024-12-09T06:07:14.175Z] 6944.12 IOPS, 27.13 MiB/s [2024-12-09T06:07:14.175Z] 6974.71 IOPS, 27.24 MiB/s [2024-12-09T06:07:14.175Z] 7004.81 IOPS, 27.36 MiB/s [2024-12-09T06:07:14.175Z] 7028.21 IOPS, 27.45 MiB/s [2024-12-09T06:07:14.175Z] 7051.70 IOPS, 27.55 MiB/s [2024-12-09T06:07:14.175Z] 7075.27 IOPS, 27.64 MiB/s [2024-12-09T06:07:14.175Z] 7100.24 IOPS, 27.74 MiB/s [2024-12-09T06:07:14.175Z] [2024-12-09 06:07:03.485633] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 01:08:19.589 7123.68 IOPS, 27.83 MiB/s [2024-12-09T06:07:14.175Z] 7149.92 IOPS, 27.93 MiB/s [2024-12-09T06:07:14.175Z] 7176.35 IOPS, 28.03 MiB/s [2024-12-09T06:07:14.175Z] 7201.58 IOPS, 28.13 MiB/s [2024-12-09T06:07:14.175Z] 7214.51 IOPS, 28.18 MiB/s [2024-12-09T06:07:14.175Z] 7241.90 IOPS, 28.29 MiB/s [2024-12-09T06:07:14.175Z] 7264.26 IOPS, 28.38 MiB/s [2024-12-09T06:07:14.175Z] 7284.85 IOPS, 28.46 MiB/s [2024-12-09T06:07:14.175Z] 7300.75 IOPS, 28.52 MiB/s [2024-12-09T06:07:14.175Z] 7319.93 IOPS, 28.59 MiB/s [2024-12-09T06:07:14.175Z] Received shutdown signal, test time was about 56.331420 seconds 01:08:19.589 01:08:19.589 Latency(us) 01:08:19.589 [2024-12-09T06:07:14.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:08:19.589 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:08:19.589 Verification LBA range: start 0x0 length 0x4000 01:08:19.589 Nvme0n1 : 56.33 7320.35 28.60 0.00 0.00 17453.41 793.13 7076934.75 01:08:19.589 [2024-12-09T06:07:14.175Z] =================================================================================================================== 01:08:19.589 [2024-12-09T06:07:14.175Z] Total : 7320.35 28.60 0.00 0.00 17453.41 793.13 7076934.75 01:08:19.589 06:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:08:19.850 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 01:08:19.850 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 01:08:19.850 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 01:08:19.850 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 01:08:19.850 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 01:08:19.850 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:08:19.850 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 01:08:19.850 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 01:08:19.850 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:08:19.850 rmmod nvme_tcp 01:08:19.850 rmmod nvme_fabrics 01:08:19.850 rmmod nvme_keyring 01:08:19.850 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:08:19.850 06:07:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 01:08:19.850 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 01:08:19.850 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 94844 ']' 01:08:19.850 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 94844 01:08:19.850 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 94844 ']' 01:08:19.850 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 94844 01:08:19.850 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 01:08:19.850 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:08:19.850 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94844 01:08:19.850 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:08:19.850 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:08:19.850 killing process with pid 94844 01:08:19.850 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94844' 01:08:19.850 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 94844 01:08:19.850 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 94844 01:08:20.110 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:08:20.110 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:08:20.110 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:08:20.110 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 01:08:20.110 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:08:20.110 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 01:08:20.110 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 01:08:20.110 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:08:20.110 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:08:20.110 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:08:20.110 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:08:20.110 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:08:20.110 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:08:20.110 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:08:20.110 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:08:20.110 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:08:20.110 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 
01:08:20.110 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:08:20.110 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:08:20.369 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:08:20.369 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:08:20.369 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:08:20.369 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 01:08:20.369 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:20.369 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:08:20.369 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:20.369 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 01:08:20.369 01:08:20.369 real 1m1.703s 01:08:20.369 user 2m55.746s 01:08:20.369 sys 0m13.143s 01:08:20.369 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 01:08:20.369 06:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 01:08:20.369 ************************************ 01:08:20.369 END TEST nvmf_host_multipath 01:08:20.369 ************************************ 01:08:20.369 06:07:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 01:08:20.369 06:07:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:08:20.369 06:07:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 01:08:20.369 06:07:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:08:20.369 ************************************ 01:08:20.369 START TEST nvmf_timeout 01:08:20.369 ************************************ 01:08:20.369 06:07:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 01:08:20.633 * Looking for test storage... 
01:08:20.633 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 01:08:20.633 06:07:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:08:20.633 06:07:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:08:20.633 06:07:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lcov --version 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:08:20.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:20.633 --rc genhtml_branch_coverage=1 01:08:20.633 --rc genhtml_function_coverage=1 01:08:20.633 --rc genhtml_legend=1 01:08:20.633 --rc geninfo_all_blocks=1 01:08:20.633 --rc geninfo_unexecuted_blocks=1 01:08:20.633 01:08:20.633 ' 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:08:20.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:20.633 --rc genhtml_branch_coverage=1 01:08:20.633 --rc genhtml_function_coverage=1 01:08:20.633 --rc genhtml_legend=1 01:08:20.633 --rc geninfo_all_blocks=1 01:08:20.633 --rc geninfo_unexecuted_blocks=1 01:08:20.633 01:08:20.633 ' 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:08:20.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:20.633 --rc genhtml_branch_coverage=1 01:08:20.633 --rc genhtml_function_coverage=1 01:08:20.633 --rc genhtml_legend=1 01:08:20.633 --rc geninfo_all_blocks=1 01:08:20.633 --rc geninfo_unexecuted_blocks=1 01:08:20.633 01:08:20.633 ' 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:08:20.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:08:20.633 --rc genhtml_branch_coverage=1 01:08:20.633 --rc genhtml_function_coverage=1 01:08:20.633 --rc genhtml_legend=1 01:08:20.633 --rc geninfo_all_blocks=1 01:08:20.633 --rc geninfo_unexecuted_blocks=1 01:08:20.633 01:08:20.633 ' 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:08:20.633 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:08:20.634 
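The xtrace above is scripts/common.sh comparing the installed lcov version against 2: each version string is split on '.' and '-', the fields are walked left to right, and the first differing field decides the result, so 1.15 compares as older than 2 and the legacy branch/function coverage flags are exported. A standalone sketch of that field-by-field comparison follows; the helper name ver_lt is hypothetical and only illustrates the logic traced above, it is not the function defined in scripts/common.sh:

    # Hypothetical helper mirroring the traced comparison (numeric fields only).
    ver_lt() {
        local IFS=.-
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0   # first smaller field: older version
            (( x > y )) && return 1
        done
        return 1                      # equal versions are not less-than
    }
    ver_lt 1.15 2 && echo "lcov is older than 2: use the legacy LCOV_OPTS"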
06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:08:20.634 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 01:08:20.634 06:07:15 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:08:20.634 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:08:20.635 Cannot find device "nvmf_init_br" 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:08:20.635 Cannot find device "nvmf_init_br2" 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 01:08:20.635 Cannot find device "nvmf_tgt_br" 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:08:20.635 Cannot find device "nvmf_tgt_br2" 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:08:20.635 Cannot find device "nvmf_init_br" 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:08:20.635 Cannot find device "nvmf_init_br2" 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:08:20.635 Cannot find device "nvmf_tgt_br" 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:08:20.635 Cannot find device "nvmf_tgt_br2" 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:08:20.635 Cannot find device "nvmf_br" 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:08:20.635 Cannot find device "nvmf_init_if" 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 01:08:20.635 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:08:20.895 Cannot find device "nvmf_init_if2" 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:08:20.895 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:08:20.895 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
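The nvmf_veth_init trace above builds the virtual topology the rest of this test runs on: the initiator veth ends (10.0.0.1 and 10.0.0.2) stay in the root namespace, the target veth ends (10.0.0.3 and 10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace, the bridge-side peers are all enslaved to nvmf_br, and iptables ACCEPT rules open TCP port 4420 on the initiator interfaces. A condensed sketch of the same wiring for a single initiator/target pair, using the interface names from the log, is given below; it is a readability aid, not a substitute for nvmf/common.sh:

    # Sketch: one initiator/target veth pair joined by the nvmf_br bridge,
    # with the target end living inside the nvmf_tgt_ns_spdk namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # initiator -> target, matching the checks just below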
01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:08:20.895 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:08:20.895 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 01:08:20.895 01:08:20.895 --- 10.0.0.3 ping statistics --- 01:08:20.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:20.895 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:08:20.895 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:08:20.895 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.112 ms 01:08:20.895 01:08:20.895 --- 10.0.0.4 ping statistics --- 01:08:20.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:20.895 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:08:20.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:08:20.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 01:08:20.895 01:08:20.895 --- 10.0.0.1 ping statistics --- 01:08:20.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:20.895 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:08:20.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:08:20.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 01:08:20.895 01:08:20.895 --- 10.0.0.2 ping statistics --- 01:08:20.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:08:20.895 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:08:20.895 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:08:21.154 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 01:08:21.154 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:08:21.154 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 01:08:21.154 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:08:21.154 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=96251 01:08:21.154 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 01:08:21.154 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 96251 01:08:21.154 06:07:15 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 96251 ']' 01:08:21.154 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:08:21.154 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 01:08:21.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:08:21.154 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:08:21.154 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 01:08:21.154 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:08:21.154 [2024-12-09 06:07:15.554393] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:08:21.154 [2024-12-09 06:07:15.554487] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:08:21.154 [2024-12-09 06:07:15.707034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:08:21.413 [2024-12-09 06:07:15.746547] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:08:21.413 [2024-12-09 06:07:15.746616] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:08:21.413 [2024-12-09 06:07:15.746630] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:08:21.413 [2024-12-09 06:07:15.746641] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:08:21.413 [2024-12-09 06:07:15.746664] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
01:08:21.413 [2024-12-09 06:07:15.747596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:08:21.413 [2024-12-09 06:07:15.747610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:08:21.413 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:08:21.413 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 01:08:21.413 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:08:21.413 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 01:08:21.413 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:08:21.413 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:08:21.413 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:08:21.413 06:07:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:08:21.672 [2024-12-09 06:07:16.220884] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:08:21.672 06:07:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:08:22.240 Malloc0 01:08:22.240 06:07:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:08:22.499 06:07:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:08:22.758 06:07:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:08:23.016 [2024-12-09 06:07:17.420275] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:08:23.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:08:23.016 06:07:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=96330 01:08:23.016 06:07:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 01:08:23.016 06:07:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 96330 /var/tmp/bdevperf.sock 01:08:23.016 06:07:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 96330 ']' 01:08:23.016 06:07:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:08:23.016 06:07:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 01:08:23.016 06:07:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
01:08:23.016 06:07:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 01:08:23.016 06:07:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:08:23.016 [2024-12-09 06:07:17.513176] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:08:23.016 [2024-12-09 06:07:17.513297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96330 ] 01:08:23.275 [2024-12-09 06:07:17.670052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:08:23.275 [2024-12-09 06:07:17.709969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:08:23.275 06:07:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:08:23.275 06:07:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 01:08:23.275 06:07:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 01:08:23.534 06:07:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 01:08:24.102 NVMe0n1 01:08:24.102 06:07:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=96364 01:08:24.102 06:07:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:08:24.102 06:07:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 01:08:24.102 Running I/O for 10 seconds... 
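By this point timeout.sh has the full data path up: the target side has a TCP transport, a 64 MiB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as its namespace, and a listener on 10.0.0.3:4420, while the bdevperf host has attached the controller with a 5 second controller-loss timeout and a 2 second reconnect delay before perform_tests starts the queue-depth-128, 4 KiB verify workload. A condensed view of that RPC sequence, with every command and value taken from the trace above, follows; it is a reconstruction for readability, not the canonical host/timeout.sh:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Target side (default nvmf_tgt RPC socket).
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # Host side (bdevperf RPC socket).
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests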
01:08:25.038 06:07:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:08:25.298 9049.00 IOPS, 35.35 MiB/s [2024-12-09T06:07:19.884Z] [2024-12-09 06:07:19.684476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.298 [2024-12-09 06:07:19.684544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.298 [2024-12-09 06:07:19.684584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:85216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.298 [2024-12-09 06:07:19.684596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.298 [2024-12-09 06:07:19.684608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:85224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.298 [2024-12-09 06:07:19.684617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.298 [2024-12-09 06:07:19.684630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:85232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.298 [2024-12-09 06:07:19.684639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.298 [2024-12-09 06:07:19.684650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:85240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.298 [2024-12-09 06:07:19.684671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.298 [2024-12-09 06:07:19.684684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:85248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.298 [2024-12-09 06:07:19.684693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.298 [2024-12-09 06:07:19.684705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:85256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.298 [2024-12-09 06:07:19.684714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.298 [2024-12-09 06:07:19.684726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:84504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.298 [2024-12-09 06:07:19.684735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.298 [2024-12-09 06:07:19.684746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:84512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.298 [2024-12-09 06:07:19.684755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.298 [2024-12-09 06:07:19.684766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84520 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.298 [2024-12-09 06:07:19.684775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.298 [2024-12-09 06:07:19.684787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:84528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.298 [2024-12-09 06:07:19.684796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.298 [2024-12-09 06:07:19.684807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:84536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.298 [2024-12-09 06:07:19.684816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.298 [2024-12-09 06:07:19.684827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:84544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.298 [2024-12-09 06:07:19.684836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.298 [2024-12-09 06:07:19.684847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:84552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.298 [2024-12-09 06:07:19.685261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.298 [2024-12-09 06:07:19.685289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:84560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.298 [2024-12-09 06:07:19.685301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.298 [2024-12-09 06:07:19.685313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:85264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.298 [2024-12-09 06:07:19.685323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.298 [2024-12-09 06:07:19.685334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:84568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.298 [2024-12-09 06:07:19.685343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.298 [2024-12-09 06:07:19.685355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.298 [2024-12-09 06:07:19.685365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.298 [2024-12-09 06:07:19.685376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:84584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.298 [2024-12-09 06:07:19.685385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.298 [2024-12-09 06:07:19.685396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:84592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 01:08:25.298 [2024-12-09 06:07:19.685405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.298 [2024-12-09 06:07:19.685416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:84600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.298 [2024-12-09 06:07:19.685425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.298 [2024-12-09 06:07:19.685543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:84608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.298 [2024-12-09 06:07:19.685556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.298 [2024-12-09 06:07:19.685567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:84616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.298 [2024-12-09 06:07:19.685576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.298 [2024-12-09 06:07:19.685941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:84624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.298 [2024-12-09 06:07:19.685968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.298 [2024-12-09 06:07:19.685984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:84632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.298 [2024-12-09 06:07:19.685994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.298 [2024-12-09 06:07:19.686005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:84640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.298 [2024-12-09 06:07:19.686017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.298 [2024-12-09 06:07:19.686028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:84648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.298 [2024-12-09 06:07:19.686037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.298 [2024-12-09 06:07:19.686048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:84656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.298 [2024-12-09 06:07:19.686057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.298 [2024-12-09 06:07:19.686068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:84664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.298 [2024-12-09 06:07:19.686077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.298 [2024-12-09 06:07:19.686088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:84672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.298 [2024-12-09 
06:07:19.686097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.298 [2024-12-09 06:07:19.686466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:84680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.298 [2024-12-09 06:07:19.686492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.298 [2024-12-09 06:07:19.686507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.298 [2024-12-09 06:07:19.686517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.686528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.686537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.686549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:84704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.686558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.686569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:84712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.686578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.686589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:84720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.686598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.686609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:84728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.686618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.686629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:84736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.686756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.687205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:84744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.687234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.687247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:84752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.687257] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.687268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:84760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.687277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.687288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:84768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.687298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.687310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:84776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.687319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.687330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.687339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.687350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.687359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.687707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:84800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.687734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.687748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:84808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.687758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.687770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.687779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.687791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:84824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.687800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.687811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:84832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.687820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.687831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:84840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.687840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.687851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:84848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.687860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.688208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:84856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.688231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.688245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:84864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.688255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.688266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:84872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.688276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.688288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:84880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.688297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.688308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:84888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.688317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.688328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.688337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.688348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:84904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.688460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.688476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:84912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.688486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.688497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.688578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.688593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:84928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.688603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.688615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.688624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.688636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:84944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.688657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.688748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.688758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.688770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:84960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.688779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.688790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:84968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.688799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.688810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:84976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.688945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.689079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:84984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.689094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.299 [2024-12-09 06:07:19.689323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.689335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
01:08:25.299 [2024-12-09 06:07:19.689347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:85000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.299 [2024-12-09 06:07:19.689356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.689368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:85008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.300 [2024-12-09 06:07:19.689376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.689387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:85016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.300 [2024-12-09 06:07:19.689523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.689678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.300 [2024-12-09 06:07:19.689785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.689800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.300 [2024-12-09 06:07:19.689810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.689822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:85040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.300 [2024-12-09 06:07:19.689831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.689851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:85048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.300 [2024-12-09 06:07:19.689860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.689871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.300 [2024-12-09 06:07:19.689880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.689891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:85064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.300 [2024-12-09 06:07:19.690022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.690130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:85072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.300 [2024-12-09 06:07:19.690140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.690152] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:85080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.300 [2024-12-09 06:07:19.690162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.690173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:85088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.300 [2024-12-09 06:07:19.690182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.690193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:85096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.300 [2024-12-09 06:07:19.690319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.690339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:85104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.300 [2024-12-09 06:07:19.690349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.690585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:85112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.300 [2024-12-09 06:07:19.690607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.690620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:85120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.300 [2024-12-09 06:07:19.690630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.690641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:85128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.300 [2024-12-09 06:07:19.690668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.690786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:85136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.300 [2024-12-09 06:07:19.690799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.691069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:85272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.300 [2024-12-09 06:07:19.691092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.691104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.300 [2024-12-09 06:07:19.691115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.691127] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:85288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.300 [2024-12-09 06:07:19.691136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.691147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:85296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.300 [2024-12-09 06:07:19.691271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.691422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:85304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.300 [2024-12-09 06:07:19.691560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.691709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:85312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.300 [2024-12-09 06:07:19.691842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.691859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:85320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.300 [2024-12-09 06:07:19.692003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.692101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:85328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.300 [2024-12-09 06:07:19.692113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.692124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:85336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.300 [2024-12-09 06:07:19.692133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.692146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:85344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.300 [2024-12-09 06:07:19.692155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.692402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.300 [2024-12-09 06:07:19.692425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.692440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:85360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.300 [2024-12-09 06:07:19.692449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.692461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:43 nsid:1 lba:85368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.300 [2024-12-09 06:07:19.692470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.692482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:85376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.300 [2024-12-09 06:07:19.692611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.692896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:85384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.300 [2024-12-09 06:07:19.693037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.693185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.300 [2024-12-09 06:07:19.693287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.693301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:85400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.300 [2024-12-09 06:07:19.693310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.693322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:85408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.300 [2024-12-09 06:07:19.693332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.693342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:85416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.300 [2024-12-09 06:07:19.693614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.693640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:85424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.300 [2024-12-09 06:07:19.693665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.300 [2024-12-09 06:07:19.693677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:85432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.301 [2024-12-09 06:07:19.693686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.301 [2024-12-09 06:07:19.693698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:85440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.301 [2024-12-09 06:07:19.693707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.301 [2024-12-09 06:07:19.693825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:85448 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 01:08:25.301 [2024-12-09 06:07:19.693837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.301 [2024-12-09 06:07:19.693990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.301 [2024-12-09 06:07:19.694083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.301 [2024-12-09 06:07:19.694098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:85464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.301 [2024-12-09 06:07:19.694108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.301 [2024-12-09 06:07:19.694120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:85472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.301 [2024-12-09 06:07:19.694130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.301 [2024-12-09 06:07:19.694141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.301 [2024-12-09 06:07:19.694359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.301 [2024-12-09 06:07:19.694380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.301 [2024-12-09 06:07:19.694391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.301 [2024-12-09 06:07:19.694402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:85496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.301 [2024-12-09 06:07:19.694411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.301 [2024-12-09 06:07:19.694422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:85504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.301 [2024-12-09 06:07:19.694431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.301 [2024-12-09 06:07:19.694443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:85512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.301 [2024-12-09 06:07:19.694743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.301 [2024-12-09 06:07:19.694771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:25.301 [2024-12-09 06:07:19.694782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.301 [2024-12-09 06:07:19.694794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.301 [2024-12-09 
06:07:19.694804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.301 [2024-12-09 06:07:19.694815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:85152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.301 [2024-12-09 06:07:19.694824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.301 [2024-12-09 06:07:19.694835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.301 [2024-12-09 06:07:19.694844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.301 [2024-12-09 06:07:19.695086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:85168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.301 [2024-12-09 06:07:19.695100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.301 [2024-12-09 06:07:19.695111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:85176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.301 [2024-12-09 06:07:19.695120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.301 [2024-12-09 06:07:19.695131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:85184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.301 [2024-12-09 06:07:19.695140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.301 [2024-12-09 06:07:19.695152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:85192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:25.301 [2024-12-09 06:07:19.695160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.301 [2024-12-09 06:07:19.695449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b7400 is same with the state(6) to be set 01:08:25.301 [2024-12-09 06:07:19.695464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:08:25.301 [2024-12-09 06:07:19.695471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:08:25.301 [2024-12-09 06:07:19.695481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85200 len:8 PRP1 0x0 PRP2 0x0 01:08:25.301 [2024-12-09 06:07:19.695490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.301 [2024-12-09 06:07:19.695866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:08:25.301 [2024-12-09 06:07:19.695898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.301 [2024-12-09 06:07:19.695910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:08:25.301 [2024-12-09 
06:07:19.695919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.301 [2024-12-09 06:07:19.695929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:08:25.301 [2024-12-09 06:07:19.695938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.301 [2024-12-09 06:07:19.695948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:08:25.301 [2024-12-09 06:07:19.695958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:25.301 [2024-12-09 06:07:19.695967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224bf30 is same with the state(6) to be set 01:08:25.301 [2024-12-09 06:07:19.696552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 01:08:25.301 [2024-12-09 06:07:19.696590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224bf30 (9): Bad file descriptor 01:08:25.301 [2024-12-09 06:07:19.696722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:08:25.301 [2024-12-09 06:07:19.696746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224bf30 with addr=10.0.0.3, port=4420 01:08:25.301 [2024-12-09 06:07:19.696989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224bf30 is same with the state(6) to be set 01:08:25.301 [2024-12-09 06:07:19.697026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224bf30 (9): Bad file descriptor 01:08:25.301 [2024-12-09 06:07:19.697045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 01:08:25.301 [2024-12-09 06:07:19.697054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 01:08:25.301 [2024-12-09 06:07:19.697065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 01:08:25.301 [2024-12-09 06:07:19.697077] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
01:08:25.301 [2024-12-09 06:07:19.697088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
01:08:25.301 06:07:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2
01:08:27.166 5281.50 IOPS, 20.63 MiB/s [2024-12-09T06:07:21.752Z] 3521.00 IOPS, 13.75 MiB/s [2024-12-09T06:07:21.752Z] [2024-12-09 06:07:21.697435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:08:27.166 [2024-12-09 06:07:21.697503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224bf30 with addr=10.0.0.3, port=4420
01:08:27.166 [2024-12-09 06:07:21.697521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224bf30 is same with the state(6) to be set
01:08:27.166 [2024-12-09 06:07:21.697547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224bf30 (9): Bad file descriptor
01:08:27.166 [2024-12-09 06:07:21.697568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
01:08:27.166 [2024-12-09 06:07:21.697578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
01:08:27.166 [2024-12-09 06:07:21.697590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
01:08:27.166 [2024-12-09 06:07:21.697601] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
01:08:27.166 [2024-12-09 06:07:21.697613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
01:08:27.166 06:07:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
01:08:27.166 06:07:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
01:08:27.167 06:07:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
01:08:27.424 06:07:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
01:08:27.424 06:07:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
01:08:27.424 06:07:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
01:08:27.425 06:07:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
01:08:27.683 06:07:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
01:08:27.683 06:07:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
01:08:29.336 2640.75 IOPS, 10.32 MiB/s [2024-12-09T06:07:23.922Z] 2112.60 IOPS, 8.25 MiB/s [2024-12-09T06:07:23.922Z] [2024-12-09 06:07:23.697738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
01:08:29.336 [2024-12-09 06:07:23.697831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224bf30 with addr=10.0.0.3, port=4420
01:08:29.336 [2024-12-09 06:07:23.697847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224bf30 is same with the state(6) to be set
01:08:29.336 [2024-12-09 06:07:23.697873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224bf30 (9): Bad file descriptor
01:08:29.336 [2024-12-09 06:07:23.697891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
01:08:29.336 [2024-12-09 06:07:23.697901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
01:08:29.336 [2024-12-09 06:07:23.697911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
01:08:29.336 [2024-12-09 06:07:23.697922] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
01:08:29.336 [2024-12-09 06:07:23.697932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
01:08:31.207 1760.50 IOPS, 6.88 MiB/s [2024-12-09T06:07:25.793Z] 1509.00 IOPS, 5.89 MiB/s [2024-12-09T06:07:25.793Z] [2024-12-09 06:07:25.697964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
01:08:31.207 [2024-12-09 06:07:25.698054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
01:08:31.207 [2024-12-09 06:07:25.698084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
01:08:31.207 [2024-12-09 06:07:25.698094] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state
01:08:31.207 [2024-12-09 06:07:25.698110] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
01:08:32.142 1320.38 IOPS, 5.16 MiB/s
01:08:32.142 Latency(us)
01:08:32.142 [2024-12-09T06:07:26.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:08:32.142 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
01:08:32.142 Verification LBA range: start 0x0 length 0x4000
01:08:32.142 NVMe0n1 : 8.18 1290.78 5.04 15.64 0.00 98003.05 2070.34 7046430.72
01:08:32.142 [2024-12-09T06:07:26.728Z] ===================================================================================================================
01:08:32.142 [2024-12-09T06:07:26.728Z] Total : 1290.78 5.04 15.64 0.00 98003.05 2070.34 7046430.72
01:08:32.142 {
01:08:32.142 "results": [
01:08:32.142 {
01:08:32.142 "job": "NVMe0n1",
01:08:32.142 "core_mask": "0x4",
01:08:32.142 "workload": "verify",
01:08:32.142 "status": "finished",
01:08:32.142 "verify_range": {
01:08:32.142 "start": 0,
01:08:32.142 "length": 16384
01:08:32.142 },
01:08:32.142 "queue_depth": 128,
01:08:32.142 "io_size": 4096,
01:08:32.142 "runtime": 8.183427,
01:08:32.142 "iops": 1290.779523053117,
01:08:32.142 "mibps": 5.042107511926238,
01:08:32.142 "io_failed": 128,
01:08:32.142 "io_timeout": 0,
01:08:32.142 "avg_latency_us": 98003.0489591075,
01:08:32.142 "min_latency_us": 2070.3418181818183,
01:08:32.142 "max_latency_us": 7046430.72
01:08:32.142 }
01:08:32.142 ],
01:08:32.142 "core_count": 1
01:08:32.142 }
01:08:32.709 06:07:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
01:08:32.709 06:07:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
01:08:32.709 06:07:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
01:08:33.275 06:07:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
01:08:33.275 06:07:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
01:08:33.275 06:07:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
01:08:33.275 06:07:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
01:08:33.534 06:07:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
01:08:33.534 06:07:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 96364
01:08:33.534 06:07:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 96330
01:08:33.534 06:07:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 96330 ']'
01:08:33.534 06:07:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 96330
01:08:33.534 06:07:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
01:08:33.534 06:07:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:08:33.534 06:07:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96330
01:08:33.534 killing process with pid 96330
01:08:33.534 Received shutdown signal, test time was about 9.391595 seconds
01:08:33.534
01:08:33.534 Latency(us)
01:08:33.534 [2024-12-09T06:07:28.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:08:33.534 [2024-12-09T06:07:28.120Z] ===================================================================================================================
01:08:33.534 [2024-12-09T06:07:28.120Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:08:33.534 06:07:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
01:08:33.534 06:07:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
01:08:33.534 06:07:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96330'
01:08:33.534 06:07:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 96330
01:08:33.534 06:07:27 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 96330
01:08:33.534 06:07:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
01:08:33.792 [2024-12-09 06:07:28.292536] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
01:08:33.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
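For reference, the get_controller / get_bdev helpers traced above amount to name lookups over the bdevperf RPC socket; once the controller-loss timeout has fired they return nothing, which is what the empty-string comparisons in the trace assert. A hypothetical standalone equivalent (command names and expected values taken from the trace, variable names invented here):

    ctrl=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')   # "NVMe0" while attached, empty after the controller is dropped
    bdev=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name')              # "NVMe0n1" while attached, empty afterwards
    [[ -z $ctrl && -z $bdev ]]                                                                      # the post-loss condition the test expects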
01:08:33.792 06:07:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=96523 01:08:33.792 06:07:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 01:08:33.792 06:07:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 96523 /var/tmp/bdevperf.sock 01:08:33.792 06:07:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 96523 ']' 01:08:33.792 06:07:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:08:33.792 06:07:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 01:08:33.792 06:07:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:08:33.792 06:07:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 01:08:33.792 06:07:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:08:34.049 [2024-12-09 06:07:28.386628] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:08:34.049 [2024-12-09 06:07:28.386773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96523 ] 01:08:34.049 [2024-12-09 06:07:28.540017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:08:34.049 [2024-12-09 06:07:28.580934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:08:34.307 06:07:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:08:34.307 06:07:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 01:08:34.307 06:07:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 01:08:34.566 06:07:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 01:08:34.826 NVMe0n1 01:08:34.826 06:07:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=96557 01:08:34.826 06:07:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 01:08:34.826 06:07:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:08:35.084 Running I/O for 10 seconds... 
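Taken together, the trace above is the whole bdevperf setup for this phase: bdevperf is started with -z so it idles until driven over /var/tmp/bdevperf.sock, rpc.py then sets the bdev_nvme options and attaches the controller, and perform_tests launches the 10-second verify workload. A minimal stand-alone sketch of the same RPC sequence, assuming bdevperf is already listening on the same socket and the repo path and target address match this run (the flag names are copied verbatim from the trace; the timing interpretation — 1 s between reconnect attempts, 2 s to start fast-failing I/O, 5 s before the controller is declared lost — is the usual meaning of these bdev_nvme options):

    #!/usr/bin/env bash
    # Drive an already-running "bdevperf -z -r /var/tmp/bdevperf.sock" instance.
    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

    # Same bdev_nvme options as in the log above (-r -1).
    $rpc_py bdev_nvme_set_options -r -1

    # Attach the NVMe-oF/TCP controller with the timeout knobs under test.
    $rpc_py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
            -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
            --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

    # Sanity-check that the controller and bdev showed up (same jq filter as timeout.sh).
    $rpc_py bdev_nvme_get_controllers | jq -r '.[].name'
    $rpc_py bdev_get_bdevs | jq -r '.[].name'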
01:08:36.021 06:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:08:36.021 9390.00 IOPS, 36.68 MiB/s [2024-12-09T06:07:30.607Z] [2024-12-09 06:07:30.600174] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152d6b0 is same with the state(6) to be set 01:08:36.021 [2024-12-09 06:07:30.600235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152d6b0 is same with the state(6) to be set 01:08:36.021 [2024-12-09 06:07:30.600263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152d6b0 is same with the state(6) to be set 01:08:36.021 [2024-12-09 06:07:30.600270] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152d6b0 is same with the state(6) to be set 01:08:36.021 [2024-12-09 06:07:30.600278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152d6b0 is same with the state(6) to be set 01:08:36.021 [2024-12-09 06:07:30.600285] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152d6b0 is same with the state(6) to be set 01:08:36.021 [2024-12-09 06:07:30.600292] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152d6b0 is same with the state(6) to be set 01:08:36.021 [2024-12-09 06:07:30.600299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152d6b0 is same with the state(6) to be set 01:08:36.021 [2024-12-09 06:07:30.600307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152d6b0 is same with the state(6) to be set 01:08:36.021 [2024-12-09 06:07:30.600314] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152d6b0 is same with the state(6) to be set 01:08:36.021 [2024-12-09 06:07:30.600321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152d6b0 is same with the state(6) to be set 01:08:36.021 [2024-12-09 06:07:30.600329] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152d6b0 is same with the state(6) to be set 01:08:36.021 [2024-12-09 06:07:30.600336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152d6b0 is same with the state(6) to be set 01:08:36.021 [2024-12-09 06:07:30.600344] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152d6b0 is same with the state(6) to be set 01:08:36.021 [2024-12-09 06:07:30.601295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.021 [2024-12-09 06:07:30.601336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.021 [2024-12-09 06:07:30.601370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:88128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.021 [2024-12-09 06:07:30.601384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.021 [2024-12-09 06:07:30.601396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:88136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.021 [2024-12-09 06:07:30.601405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.021 [2024-12-09 06:07:30.601415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:88144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.021 [2024-12-09 06:07:30.601439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.021 [2024-12-09 06:07:30.601450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.021 [2024-12-09 06:07:30.601749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.021 [2024-12-09 06:07:30.601764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:88160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.021 [2024-12-09 06:07:30.601774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.021 [2024-12-09 06:07:30.601785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:88168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.021 [2024-12-09 06:07:30.601796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.021 [2024-12-09 06:07:30.601808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.021 [2024-12-09 06:07:30.601817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.021 [2024-12-09 06:07:30.601828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:88184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.021 [2024-12-09 06:07:30.601837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.021 [2024-12-09 06:07:30.601849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:88192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.021 [2024-12-09 06:07:30.602086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.021 [2024-12-09 06:07:30.602101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:88200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.021 [2024-12-09 06:07:30.602111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.021 [2024-12-09 06:07:30.602122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:88208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.021 [2024-12-09 06:07:30.602133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.021 [2024-12-09 06:07:30.602144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.021 [2024-12-09 06:07:30.602153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 01:08:36.021 [2024-12-09 06:07:30.602164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:88224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.021 [2024-12-09 06:07:30.602174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.021 [2024-12-09 06:07:30.602191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.021 [2024-12-09 06:07:30.602335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.021 [2024-12-09 06:07:30.602559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:88240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.021 [2024-12-09 06:07:30.602572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.021 [2024-12-09 06:07:30.602583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:88248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.021 [2024-12-09 06:07:30.602593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.021 [2024-12-09 06:07:30.602605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:88256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.021 [2024-12-09 06:07:30.602615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.021 [2024-12-09 06:07:30.602953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:88264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.022 [2024-12-09 06:07:30.602980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.602993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:88272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.022 [2024-12-09 06:07:30.603004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.603017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:88280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.022 [2024-12-09 06:07:30.603026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.603038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:87616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.022 [2024-12-09 06:07:30.603047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.603059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:87624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.022 [2024-12-09 06:07:30.603068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.603304] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:87632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.022 [2024-12-09 06:07:30.603328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.603341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:87640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.022 [2024-12-09 06:07:30.603350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.603363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:87648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.022 [2024-12-09 06:07:30.603372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.603383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.022 [2024-12-09 06:07:30.603393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.603404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:87664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.022 [2024-12-09 06:07:30.603413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.603656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.022 [2024-12-09 06:07:30.603676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.603689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:88296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.022 [2024-12-09 06:07:30.603699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.603711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.022 [2024-12-09 06:07:30.603721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.603732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.022 [2024-12-09 06:07:30.603741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.603752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:88320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.022 [2024-12-09 06:07:30.603761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.604000] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:88328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.022 [2024-12-09 06:07:30.604024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.604037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.022 [2024-12-09 06:07:30.604047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.604059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.022 [2024-12-09 06:07:30.604068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.604080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:88352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.022 [2024-12-09 06:07:30.604089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.604100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:88360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.022 [2024-12-09 06:07:30.604109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.604335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:88368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.022 [2024-12-09 06:07:30.604358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.604372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:88376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.022 [2024-12-09 06:07:30.604381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.604393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:88384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.022 [2024-12-09 06:07:30.604404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.604415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:88392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.022 [2024-12-09 06:07:30.604424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.604435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:88400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.022 [2024-12-09 06:07:30.604444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.604690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88408 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.022 [2024-12-09 06:07:30.604704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.604715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:88416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.022 [2024-12-09 06:07:30.604724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.604736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:88424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.022 [2024-12-09 06:07:30.604745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.604756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.022 [2024-12-09 06:07:30.604765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.604991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:88440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.022 [2024-12-09 06:07:30.605012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.605025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:88448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.022 [2024-12-09 06:07:30.605036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.605047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:88456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.022 [2024-12-09 06:07:30.605056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.605068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:88464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.022 [2024-12-09 06:07:30.605077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.605089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.022 [2024-12-09 06:07:30.605321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.605345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.022 [2024-12-09 06:07:30.605356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.605367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.022 
[2024-12-09 06:07:30.605378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.605389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.022 [2024-12-09 06:07:30.605398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.022 [2024-12-09 06:07:30.605409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:88504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.022 [2024-12-09 06:07:30.605418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.283 [2024-12-09 06:07:30.605578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.283 [2024-12-09 06:07:30.605660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.283 [2024-12-09 06:07:30.605675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:87680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.283 [2024-12-09 06:07:30.605685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.283 [2024-12-09 06:07:30.605696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:87688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.283 [2024-12-09 06:07:30.605706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.283 [2024-12-09 06:07:30.605718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:87696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.283 [2024-12-09 06:07:30.605727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.283 [2024-12-09 06:07:30.605738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:87704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.283 [2024-12-09 06:07:30.605997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.283 [2024-12-09 06:07:30.606011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.283 [2024-12-09 06:07:30.606021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.283 [2024-12-09 06:07:30.606033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.283 [2024-12-09 06:07:30.606051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.283 [2024-12-09 06:07:30.606063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:87728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.283 [2024-12-09 06:07:30.606072] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.283 [2024-12-09 06:07:30.606083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:87736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.283 [2024-12-09 06:07:30.606322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.283 [2024-12-09 06:07:30.606346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:87744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.283 [2024-12-09 06:07:30.606357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.283 [2024-12-09 06:07:30.606369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:87752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.283 [2024-12-09 06:07:30.606380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.283 [2024-12-09 06:07:30.606391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.283 [2024-12-09 06:07:30.606401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.283 [2024-12-09 06:07:30.606412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:87768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.283 [2024-12-09 06:07:30.606421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.283 [2024-12-09 06:07:30.606677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:87776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.283 [2024-12-09 06:07:30.606690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.283 [2024-12-09 06:07:30.606718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:87784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.283 [2024-12-09 06:07:30.606735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.283 [2024-12-09 06:07:30.606752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:87792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.283 [2024-12-09 06:07:30.606764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.283 [2024-12-09 06:07:30.607157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.283 [2024-12-09 06:07:30.607183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.283 [2024-12-09 06:07:30.607198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:87800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.283 [2024-12-09 06:07:30.607208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.283 [2024-12-09 06:07:30.607219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:87808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.283 [2024-12-09 06:07:30.607230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.283 [2024-12-09 06:07:30.607241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:87816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.283 [2024-12-09 06:07:30.607251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.283 [2024-12-09 06:07:30.607262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:87824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.283 [2024-12-09 06:07:30.607271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.283 [2024-12-09 06:07:30.607525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.283 [2024-12-09 06:07:30.607538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.283 [2024-12-09 06:07:30.607549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.283 [2024-12-09 06:07:30.607789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.283 [2024-12-09 06:07:30.607807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:87848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.283 [2024-12-09 06:07:30.607818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.283 [2024-12-09 06:07:30.607829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:87856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.283 [2024-12-09 06:07:30.607839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.283 [2024-12-09 06:07:30.607850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.283 [2024-12-09 06:07:30.607981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.607998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:87872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.284 [2024-12-09 06:07:30.608226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.608254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:87880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.284 [2024-12-09 06:07:30.608264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.608277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:87888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.284 [2024-12-09 06:07:30.608286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.608298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.284 [2024-12-09 06:07:30.608308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.608319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:87904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.284 [2024-12-09 06:07:30.608328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.608470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:87912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.284 [2024-12-09 06:07:30.608565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.608581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:87920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.284 [2024-12-09 06:07:30.608590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.608602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:87928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.284 [2024-12-09 06:07:30.608612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.608623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:87936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.284 [2024-12-09 06:07:30.608633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.608658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:87944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.284 [2024-12-09 06:07:30.608894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.608911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.284 [2024-12-09 06:07:30.608921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.608933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:87960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.284 [2024-12-09 06:07:30.608942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 
[2024-12-09 06:07:30.609220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:87968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.284 [2024-12-09 06:07:30.609231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.609242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:87976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.284 [2024-12-09 06:07:30.609252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.609264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:87984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.284 [2024-12-09 06:07:30.609273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.609284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:87992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.284 [2024-12-09 06:07:30.609293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.609305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:88000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.284 [2024-12-09 06:07:30.609587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.609834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:88008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.284 [2024-12-09 06:07:30.609849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.609860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:88016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.284 [2024-12-09 06:07:30.609870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.609882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:88024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.284 [2024-12-09 06:07:30.609892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.609903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:88032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.284 [2024-12-09 06:07:30.609912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.610199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:88040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.284 [2024-12-09 06:07:30.610212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.610223] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:88048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.284 [2024-12-09 06:07:30.610233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.610244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:88056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.284 [2024-12-09 06:07:30.610253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.610264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.284 [2024-12-09 06:07:30.610274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.610494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:88072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.284 [2024-12-09 06:07:30.610518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.610530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:88080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.284 [2024-12-09 06:07:30.610540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.610552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:88088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.284 [2024-12-09 06:07:30.610562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.610574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:88096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.284 [2024-12-09 06:07:30.610583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.610594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:88104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.284 [2024-12-09 06:07:30.610754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.610875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:88112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:36.284 [2024-12-09 06:07:30.610887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.610898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:88520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.284 [2024-12-09 06:07:30.611040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.611167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.284 [2024-12-09 06:07:30.611195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.611476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.284 [2024-12-09 06:07:30.611624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.611840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:88544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.284 [2024-12-09 06:07:30.611862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.611876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:88552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.284 [2024-12-09 06:07:30.611885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.611897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:88560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.284 [2024-12-09 06:07:30.611906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.611917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:88568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.284 [2024-12-09 06:07:30.611926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.611937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:88576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.284 [2024-12-09 06:07:30.612279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.284 [2024-12-09 06:07:30.612302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:88584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.285 [2024-12-09 06:07:30.612313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.285 [2024-12-09 06:07:30.612325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.285 [2024-12-09 06:07:30.612335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.285 [2024-12-09 06:07:30.612346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:88600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.285 [2024-12-09 06:07:30.612355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.285 [2024-12-09 06:07:30.612366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:95 nsid:1 lba:88608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.285 [2024-12-09 06:07:30.612376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.285 [2024-12-09 06:07:30.612387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:88616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.285 [2024-12-09 06:07:30.612521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.285 [2024-12-09 06:07:30.612537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 01:08:36.285 [2024-12-09 06:07:30.612546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.285 [2024-12-09 06:07:30.612791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:08:36.285 [2024-12-09 06:07:30.612816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:08:36.285 [2024-12-09 06:07:30.612826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88632 len:8 PRP1 0x0 PRP2 0x0 01:08:36.285 [2024-12-09 06:07:30.612836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.285 [2024-12-09 06:07:30.613201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:08:36.285 [2024-12-09 06:07:30.613230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.285 [2024-12-09 06:07:30.613243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:08:36.285 [2024-12-09 06:07:30.613253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.285 [2024-12-09 06:07:30.613263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:08:36.285 [2024-12-09 06:07:30.613272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.285 [2024-12-09 06:07:30.613282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:08:36.285 [2024-12-09 06:07:30.613291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:36.285 [2024-12-09 06:07:30.613301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb94f30 is same with the state(6) to be set 01:08:36.285 [2024-12-09 06:07:30.613803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:08:36.285 [2024-12-09 06:07:30.613840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb94f30 (9): Bad file descriptor 01:08:36.285 [2024-12-09 06:07:30.614140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:08:36.285 [2024-12-09 06:07:30.614175] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb94f30 with addr=10.0.0.3, port=4420 01:08:36.285 [2024-12-09 06:07:30.614188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb94f30 is same with the state(6) to be set 01:08:36.285 [2024-12-09 06:07:30.614208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb94f30 (9): Bad file descriptor 01:08:36.285 [2024-12-09 06:07:30.614224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:08:36.285 [2024-12-09 06:07:30.614234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:08:36.285 [2024-12-09 06:07:30.614245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:08:36.285 [2024-12-09 06:07:30.614255] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 01:08:36.285 [2024-12-09 06:07:30.614544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:08:36.285 06:07:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 01:08:37.145 5476.00 IOPS, 21.39 MiB/s [2024-12-09T06:07:31.731Z] [2024-12-09 06:07:31.614940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:08:37.145 [2024-12-09 06:07:31.615020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb94f30 with addr=10.0.0.3, port=4420 01:08:37.145 [2024-12-09 06:07:31.615038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb94f30 is same with the state(6) to be set 01:08:37.145 [2024-12-09 06:07:31.615066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb94f30 (9): Bad file descriptor 01:08:37.145 [2024-12-09 06:07:31.615086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 01:08:37.145 [2024-12-09 06:07:31.615097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 01:08:37.145 [2024-12-09 06:07:31.615108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 01:08:37.145 [2024-12-09 06:07:31.615120] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 01:08:37.145 [2024-12-09 06:07:31.615131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 01:08:37.145 06:07:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:08:37.404 [2024-12-09 06:07:31.890296] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:08:37.404 06:07:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 96557 01:08:38.228 3650.67 IOPS, 14.26 MiB/s [2024-12-09T06:07:32.814Z] [2024-12-09 06:07:32.633958] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
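The burst of errors above is the intended failure/recovery cycle rather than a test problem: the listener on 10.0.0.3:4420 is removed at 06:07:30, the in-flight commands are aborted (ABORTED - SQ DELETION), reconnect attempts fail with connect() errno 111 roughly once per second (the 1 s reconnect delay), the listener is restored at 06:07:31.9, and the controller reset completes successfully at 06:07:32.6 — inside the 5 s controller-loss window — so the waiting perform_tests run can finish. A sketch of that fault-injection step against the target's default RPC socket, using the same RPCs as the test (the 2 s hold-down below is illustrative; the run above kept the listener down for about 1.3 s):

    #!/usr/bin/env bash
    # Temporarily take the TCP listener away from the subsystem, then bring it back
    # before the initiator's --ctrlr-loss-timeout-sec (5 s) expires.
    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    nqn="nqn.2016-06.io.spdk:cnode1"

    $rpc_py nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420
    sleep 2   # hold the listener down briefly, well short of controller loss
    $rpc_py nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420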
01:08:40.102 2738.00 IOPS, 10.70 MiB/s [2024-12-09T06:07:35.625Z] 3760.20 IOPS, 14.69 MiB/s [2024-12-09T06:07:36.560Z] 4729.83 IOPS, 18.48 MiB/s [2024-12-09T06:07:37.519Z] 5437.00 IOPS, 21.24 MiB/s [2024-12-09T06:07:38.454Z] 5956.12 IOPS, 23.27 MiB/s [2024-12-09T06:07:39.831Z] 6393.33 IOPS, 24.97 MiB/s [2024-12-09T06:07:39.831Z] 6724.20 IOPS, 26.27 MiB/s 01:08:45.245 Latency(us) 01:08:45.245 [2024-12-09T06:07:39.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:08:45.245 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:08:45.245 Verification LBA range: start 0x0 length 0x4000 01:08:45.245 NVMe0n1 : 10.01 6732.16 26.30 0.00 0.00 18989.87 1966.08 3035150.89 01:08:45.245 [2024-12-09T06:07:39.831Z] =================================================================================================================== 01:08:45.245 [2024-12-09T06:07:39.831Z] Total : 6732.16 26.30 0.00 0.00 18989.87 1966.08 3035150.89 01:08:45.245 { 01:08:45.245 "results": [ 01:08:45.245 { 01:08:45.245 "job": "NVMe0n1", 01:08:45.245 "core_mask": "0x4", 01:08:45.245 "workload": "verify", 01:08:45.245 "status": "finished", 01:08:45.245 "verify_range": { 01:08:45.245 "start": 0, 01:08:45.245 "length": 16384 01:08:45.245 }, 01:08:45.245 "queue_depth": 128, 01:08:45.245 "io_size": 4096, 01:08:45.245 "runtime": 10.007187, 01:08:45.245 "iops": 6732.161595461342, 01:08:45.245 "mibps": 26.297506232270866, 01:08:45.245 "io_failed": 0, 01:08:45.245 "io_timeout": 0, 01:08:45.245 "avg_latency_us": 18989.87224850554, 01:08:45.245 "min_latency_us": 1966.08, 01:08:45.245 "max_latency_us": 3035150.8945454545 01:08:45.245 } 01:08:45.245 ], 01:08:45.245 "core_count": 1 01:08:45.245 } 01:08:45.245 06:07:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=96674 01:08:45.245 06:07:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:08:45.245 06:07:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 01:08:45.245 Running I/O for 10 seconds... 
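The JSON object earlier in this block (runtime 10.007187 s, ~6732 IOPS) is the raw result perform_tests returns for the completed 10-second run; the Latency table is rendered from the same fields. Since jq is already used throughout this test, the per-job numbers can be pulled out of a captured copy directly — a small sketch, assuming the object was saved to a hypothetical result.json:

    # Summarize each job from a saved bdevperf result (field names as in the output above).
    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, \(.io_failed) failed, avg \(.avg_latency_us) us over \(.runtime) s"' result.json
    # For the run above this prints:
    # NVMe0n1: 6732.161595461342 IOPS, 26.297506232270866 MiB/s, 0 failed, avg 18989.87224850554 us over 10.007187 s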
01:08:46.181 06:07:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
01:08:46.181 9264.00 IOPS, 36.19 MiB/s [2024-12-09T06:07:40.767Z]
01:08:46.181 [2024-12-09 06:07:40.752032] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152bd40 is same with the state(6) to be set
[the same tcp.c:1790 *ERROR* message for tqpair=0x152bd40 repeats for every intermediate timestamp from 06:07:40.752088 through 06:07:40.753190]
01:08:46.182 [2024-12-09 06:07:40.754429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:84760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:08:46.182 [2024-12-09 06:07:40.754473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:08:46.182 [2024-12-09 06:07:40.754495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:84768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:08:46.182 [2024-12-09 06:07:40.754507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the same print_command / print_completion pattern repeats for every outstanding I/O on sqid:1 — READ commands for lba 84776 through 85504 (len:8, SGL TRANSPORT DATA BLOCK) and WRITE commands for lba 85560 through 85776 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) — each completed as ABORTED - SQ DELETION (00/08); timestamps run from 06:07:40.754519 through 06:07:40.764664]
01:08:46.441 [2024-12-09 06:07:40.764676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:85512 len:8
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:46.441 [2024-12-09 06:07:40.764687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:46.441 [2024-12-09 06:07:40.764698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:85520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:46.441 [2024-12-09 06:07:40.764708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:46.441 [2024-12-09 06:07:40.764719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:85528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:46.441 [2024-12-09 06:07:40.764728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:46.441 [2024-12-09 06:07:40.765078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:85536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:46.441 [2024-12-09 06:07:40.765099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:46.442 [2024-12-09 06:07:40.765111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:85544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:46.442 [2024-12-09 06:07:40.765121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:46.442 [2024-12-09 06:07:40.765155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:08:46.442 [2024-12-09 06:07:40.765166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:08:46.442 [2024-12-09 06:07:40.765175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85552 len:8 PRP1 0x0 PRP2 0x0 01:08:46.442 [2024-12-09 06:07:40.765185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:46.442 [2024-12-09 06:07:40.765519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:08:46.442 [2024-12-09 06:07:40.765544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:46.442 [2024-12-09 06:07:40.765556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:08:46.442 [2024-12-09 06:07:40.765566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:46.442 [2024-12-09 06:07:40.765576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:08:46.442 [2024-12-09 06:07:40.765585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:46.442 [2024-12-09 06:07:40.765595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:08:46.442 [2024-12-09 06:07:40.765604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:46.442 [2024-12-09 06:07:40.765613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb94f30 is same with the state(6) to be set 01:08:46.442 [2024-12-09 06:07:40.766171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 01:08:46.442 [2024-12-09 06:07:40.766210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb94f30 (9): Bad file descriptor 01:08:46.442 [2024-12-09 06:07:40.766321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:08:46.442 [2024-12-09 06:07:40.766435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb94f30 with addr=10.0.0.3, port=4420 01:08:46.442 [2024-12-09 06:07:40.766451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb94f30 is same with the state(6) to be set 01:08:46.442 [2024-12-09 06:07:40.766470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb94f30 (9): Bad file descriptor 01:08:46.442 [2024-12-09 06:07:40.766612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 01:08:46.442 [2024-12-09 06:07:40.766782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 01:08:46.442 [2024-12-09 06:07:40.766797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 01:08:46.442 [2024-12-09 06:07:40.766809] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 01:08:46.442 [2024-12-09 06:07:40.766821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 01:08:46.442 06:07:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 01:08:47.375 5297.50 IOPS, 20.69 MiB/s [2024-12-09T06:07:41.961Z] [2024-12-09 06:07:41.766977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:08:47.375 [2024-12-09 06:07:41.767069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb94f30 with addr=10.0.0.3, port=4420 01:08:47.375 [2024-12-09 06:07:41.767087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb94f30 is same with the state(6) to be set 01:08:47.375 [2024-12-09 06:07:41.767115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb94f30 (9): Bad file descriptor 01:08:47.375 [2024-12-09 06:07:41.767135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 01:08:47.375 [2024-12-09 06:07:41.767145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 01:08:47.375 [2024-12-09 06:07:41.767156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 01:08:47.375 [2024-12-09 06:07:41.767167] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
01:08:47.375 [2024-12-09 06:07:41.767179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 01:08:48.311 3531.67 IOPS, 13.80 MiB/s [2024-12-09T06:07:42.897Z] [2024-12-09 06:07:42.767314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:08:48.311 [2024-12-09 06:07:42.767400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb94f30 with addr=10.0.0.3, port=4420 01:08:48.311 [2024-12-09 06:07:42.767416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb94f30 is same with the state(6) to be set 01:08:48.311 [2024-12-09 06:07:42.767438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb94f30 (9): Bad file descriptor 01:08:48.311 [2024-12-09 06:07:42.767456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 01:08:48.311 [2024-12-09 06:07:42.767466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 01:08:48.311 [2024-12-09 06:07:42.767476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 01:08:48.311 [2024-12-09 06:07:42.767487] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 01:08:48.311 [2024-12-09 06:07:42.767497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 01:08:49.246 2648.75 IOPS, 10.35 MiB/s [2024-12-09T06:07:43.832Z] [2024-12-09 06:07:43.770438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:08:49.246 [2024-12-09 06:07:43.770524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb94f30 with addr=10.0.0.3, port=4420 01:08:49.246 [2024-12-09 06:07:43.770539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb94f30 is same with the state(6) to be set 01:08:49.246 [2024-12-09 06:07:43.771056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb94f30 (9): Bad file descriptor 01:08:49.246 [2024-12-09 06:07:43.771532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 01:08:49.246 [2024-12-09 06:07:43.771559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 01:08:49.246 [2024-12-09 06:07:43.771573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 01:08:49.246 [2024-12-09 06:07:43.771584] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
01:08:49.246 [2024-12-09 06:07:43.771597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 01:08:49.246 06:07:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:08:49.503 [2024-12-09 06:07:44.055737] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:08:49.503 06:07:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 96674 01:08:50.328 2119.00 IOPS, 8.28 MiB/s [2024-12-09T06:07:44.914Z] [2024-12-09 06:07:44.799450] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 01:08:52.201 3037.17 IOPS, 11.86 MiB/s [2024-12-09T06:07:47.724Z] 3968.71 IOPS, 15.50 MiB/s [2024-12-09T06:07:48.662Z] 4648.50 IOPS, 18.16 MiB/s [2024-12-09T06:07:50.040Z] 5189.00 IOPS, 20.27 MiB/s [2024-12-09T06:07:50.040Z] 5619.30 IOPS, 21.95 MiB/s 01:08:55.454 Latency(us) 01:08:55.454 [2024-12-09T06:07:50.040Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:08:55.454 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 01:08:55.454 Verification LBA range: start 0x0 length 0x4000 01:08:55.454 NVMe0n1 : 10.01 5623.64 21.97 3638.28 0.00 13787.05 722.39 3035150.89 01:08:55.454 [2024-12-09T06:07:50.040Z] =================================================================================================================== 01:08:55.454 [2024-12-09T06:07:50.040Z] Total : 5623.64 21.97 3638.28 0.00 13787.05 0.00 3035150.89 01:08:55.454 { 01:08:55.454 "results": [ 01:08:55.454 { 01:08:55.454 "job": "NVMe0n1", 01:08:55.454 "core_mask": "0x4", 01:08:55.454 "workload": "verify", 01:08:55.454 "status": "finished", 01:08:55.454 "verify_range": { 01:08:55.454 "start": 0, 01:08:55.454 "length": 16384 01:08:55.454 }, 01:08:55.454 "queue_depth": 128, 01:08:55.454 "io_size": 4096, 01:08:55.454 "runtime": 10.010768, 01:08:55.454 "iops": 5623.644459645853, 01:08:55.454 "mibps": 21.967361170491614, 01:08:55.454 "io_failed": 36422, 01:08:55.454 "io_timeout": 0, 01:08:55.454 "avg_latency_us": 13787.046566703502, 01:08:55.454 "min_latency_us": 722.3854545454545, 01:08:55.454 "max_latency_us": 3035150.8945454545 01:08:55.454 } 01:08:55.454 ], 01:08:55.454 "core_count": 1 01:08:55.454 } 01:08:55.454 06:07:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 96523 01:08:55.454 06:07:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 96523 ']' 01:08:55.454 06:07:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 96523 01:08:55.454 06:07:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 01:08:55.454 06:07:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:08:55.454 06:07:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96523 01:08:55.454 killing process with pid 96523 01:08:55.454 Received shutdown signal, test time was about 10.000000 seconds 01:08:55.454 01:08:55.454 Latency(us) 01:08:55.454 [2024-12-09T06:07:50.040Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:08:55.454 [2024-12-09T06:07:50.040Z] =================================================================================================================== 01:08:55.454 [2024-12-09T06:07:50.040Z] 
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:08:55.454 06:07:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:08:55.454 06:07:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:08:55.454 06:07:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96523' 01:08:55.454 06:07:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 96523 01:08:55.454 06:07:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 96523 01:08:55.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:08:55.454 06:07:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=96800 01:08:55.454 06:07:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 01:08:55.454 06:07:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 96800 /var/tmp/bdevperf.sock 01:08:55.454 06:07:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 96800 ']' 01:08:55.454 06:07:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:08:55.454 06:07:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 01:08:55.454 06:07:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:08:55.454 06:07:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 01:08:55.454 06:07:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:08:55.454 [2024-12-09 06:07:49.877278] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:08:55.454 [2024-12-09 06:07:49.877577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96800 ] 01:08:55.454 [2024-12-09 06:07:50.027733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:08:55.713 [2024-12-09 06:07:50.062859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:08:55.713 06:07:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:08:55.713 06:07:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 01:08:55.713 06:07:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=96809 01:08:55.713 06:07:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 01:08:55.713 06:07:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96800 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 01:08:56.006 06:07:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 01:08:56.265 NVMe0n1 01:08:56.265 06:07:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:08:56.265 06:07:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=96869 01:08:56.265 06:07:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 01:08:56.524 Running I/O for 10 seconds... 
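(For readability, the setup sequence traced above can be read together as the following sketch; every command is copied from the xtrace lines in this log, with the paths, socket, address, and NQN exactly as they appear in this run, and it is not intended as a standalone reproduction script:
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
    /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96800 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
i.e. a second bdevperf instance is started on core mask 0x4, NVMe reconnect options are set over its RPC socket, the bpftrace timeout probe is attached to its pid, and the controller is attached with a 5-second ctrlr-loss timeout and 2-second reconnect delay before the 10-second randread run begins.)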
01:08:57.460 06:07:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:08:57.721 18882.00 IOPS, 73.76 MiB/s [2024-12-09T06:07:52.307Z] [2024-12-09 06:07:52.104284] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104346] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104365] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104390] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104397] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104415] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104422] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104430] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104437] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104453] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104469] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104492] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 
01:08:57.721 [2024-12-09 06:07:52.104514] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104522] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104529] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104545] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104552] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104559] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104567] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104574] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104581] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104605] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104637] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.721 [2024-12-09 06:07:52.104645] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.722 [2024-12-09 06:07:52.104670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.722 [2024-12-09 06:07:52.104679] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.722 [2024-12-09 06:07:52.104687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.722 [2024-12-09 06:07:52.104695] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.722 [2024-12-09 06:07:52.104703] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.722 [2024-12-09 06:07:52.104725] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.722 [2024-12-09 06:07:52.104734] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.722 [2024-12-09 06:07:52.104743] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x152f250 is same with the state(6) to be set 01:08:57.722 [2024-12-09 06:07:52.104751] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.722 [2024-12-09 06:07:52.104759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.722 [2024-12-09 06:07:52.104767] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.722 [2024-12-09 06:07:52.104776] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.722 [2024-12-09 06:07:52.104784] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.722 [2024-12-09 06:07:52.104792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.722 [2024-12-09 06:07:52.104800] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.722 [2024-12-09 06:07:52.104809] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.722 [2024-12-09 06:07:52.104817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.722 [2024-12-09 06:07:52.104825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.722 [2024-12-09 06:07:52.104833] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.722 [2024-12-09 06:07:52.104841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.722 [2024-12-09 06:07:52.104849] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.722 [2024-12-09 06:07:52.104861] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.722 [2024-12-09 06:07:52.104869] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.722 [2024-12-09 06:07:52.104877] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.722 [2024-12-09 06:07:52.104885] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.722 [2024-12-09 06:07:52.104893] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.722 [2024-12-09 06:07:52.104903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.722 [2024-12-09 06:07:52.104911] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152f250 is same with the state(6) to be set 01:08:57.722 [2024-12-09 06:07:52.105625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:121544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.722 [2024-12-09 
06:07:52.105676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.722 [2024-12-09 06:07:52.105702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:43704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.722 [2024-12-09 06:07:52.105714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.722 [2024-12-09 06:07:52.105726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.722 [2024-12-09 06:07:52.105736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.722 [2024-12-09 06:07:52.105747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:90088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.722 [2024-12-09 06:07:52.105757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.722 [2024-12-09 06:07:52.105768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.722 [2024-12-09 06:07:52.105778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.722 [2024-12-09 06:07:52.105789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:57552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.722 [2024-12-09 06:07:52.105798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.722 [2024-12-09 06:07:52.105809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.722 [2024-12-09 06:07:52.105818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.722 [2024-12-09 06:07:52.105830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:111440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.722 [2024-12-09 06:07:52.105839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.722 [2024-12-09 06:07:52.105850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.722 [2024-12-09 06:07:52.105859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.722 [2024-12-09 06:07:52.105870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.722 [2024-12-09 06:07:52.105879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.722 [2024-12-09 06:07:52.105890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.722 [2024-12-09 06:07:52.105899] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.722 [2024-12-09 06:07:52.105910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.722 [2024-12-09 06:07:52.105919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.722 [2024-12-09 06:07:52.105930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:82664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.722 [2024-12-09 06:07:52.106359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.722 [2024-12-09 06:07:52.106391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:40160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.722 [2024-12-09 06:07:52.106402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.722 [2024-12-09 06:07:52.106414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.722 [2024-12-09 06:07:52.106424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.722 [2024-12-09 06:07:52.106437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:35536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.722 [2024-12-09 06:07:52.106446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.722 [2024-12-09 06:07:52.106458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.722 [2024-12-09 06:07:52.106467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.722 [2024-12-09 06:07:52.106478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:111912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.722 [2024-12-09 06:07:52.106488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.722 [2024-12-09 06:07:52.106499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:39704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.722 [2024-12-09 06:07:52.106507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.722 [2024-12-09 06:07:52.106518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:28336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.722 [2024-12-09 06:07:52.106527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.722 [2024-12-09 06:07:52.106538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:108744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.722 [2024-12-09 06:07:52.106685] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.722 [2024-12-09 06:07:52.106711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:109288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.722 [2024-12-09 06:07:52.106855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.722 [2024-12-09 06:07:52.106871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.722 [2024-12-09 06:07:52.106881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.722 [2024-12-09 06:07:52.106893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:88656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.722 [2024-12-09 06:07:52.106902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.722 [2024-12-09 06:07:52.107202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.722 [2024-12-09 06:07:52.107216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.722 [2024-12-09 06:07:52.107228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:130832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.722 [2024-12-09 06:07:52.107238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.722 [2024-12-09 06:07:52.107252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:34128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.107261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.107273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:104800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.107282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.107293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:48848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.107302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.107313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:44040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.107411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.107425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:38912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.107434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.107446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:30960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.107456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.107842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:110488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.107864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.107877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.107888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.108143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:129792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.108234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.108253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:38840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.108263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.108274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:85784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.108285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.108296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.108383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.108402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:123008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.108412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.108425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:54728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.108434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.108445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:48200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.108454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 01:08:57.723 [2024-12-09 06:07:52.108465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:48632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.108474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.108486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.108495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.108506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.108515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.108526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:121784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.108535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.108546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:41984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.108555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.108580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:57248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.108589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.108600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:47112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.108610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.108622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.108631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.108641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:128312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.108650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.108688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.108699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 
06:07:52.108710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.108719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.108731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.108741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.108752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.108761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.108771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:103832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.108781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.108792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:40640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.108801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.108812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:63264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.108820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.108831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:127096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.108841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.108852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:127536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.108861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.108872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:94224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.108881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.108893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.108903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.108914] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.108924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.108934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:55912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.108944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.108954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.108964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.108975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:76160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.108985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.723 [2024-12-09 06:07:52.108996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:89320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.723 [2024-12-09 06:07:52.109005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:106240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:48216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:100664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109116] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:35 nsid:1 lba:52768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:128184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:130960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:29696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:40096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:103024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:59728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 
lba:62888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:116000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:43520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:114448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:48032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:58912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:46136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:114336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18840 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:47840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:77152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:62728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:45192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:108664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:66712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:112096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:102768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:103008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 
[2024-12-09 06:07:52.109768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:36512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:115792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:48200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.724 [2024-12-09 06:07:52.109827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.724 [2024-12-09 06:07:52.109838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.725 [2024-12-09 06:07:52.109847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.725 [2024-12-09 06:07:52.109858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:37688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.725 [2024-12-09 06:07:52.109867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.725 [2024-12-09 06:07:52.109878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.725 [2024-12-09 06:07:52.109889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.725 [2024-12-09 06:07:52.109901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:60832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.725 [2024-12-09 06:07:52.109920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.725 [2024-12-09 06:07:52.109933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:94264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.725 [2024-12-09 06:07:52.109943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.725 [2024-12-09 06:07:52.109954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.725 [2024-12-09 06:07:52.109964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.725 [2024-12-09 06:07:52.109975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:127472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.725 [2024-12-09 06:07:52.109983] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.725 [2024-12-09 06:07:52.109995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.725 [2024-12-09 06:07:52.110004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.725 [2024-12-09 06:07:52.110015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:73320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.725 [2024-12-09 06:07:52.110024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.725 [2024-12-09 06:07:52.110035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.725 [2024-12-09 06:07:52.110044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.725 [2024-12-09 06:07:52.110055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.725 [2024-12-09 06:07:52.110064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.725 [2024-12-09 06:07:52.110075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:45392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.725 [2024-12-09 06:07:52.110083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.725 [2024-12-09 06:07:52.110094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:48504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.725 [2024-12-09 06:07:52.110103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.725 [2024-12-09 06:07:52.110114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:129096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.725 [2024-12-09 06:07:52.110123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.725 [2024-12-09 06:07:52.110134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.725 [2024-12-09 06:07:52.110143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.725 [2024-12-09 06:07:52.110153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:67640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.725 [2024-12-09 06:07:52.110162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.725 [2024-12-09 06:07:52.110173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:29272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.725 [2024-12-09 06:07:52.110182] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.725 [2024-12-09 06:07:52.110193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:53936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.725 [2024-12-09 06:07:52.110203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.725 [2024-12-09 06:07:52.110214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.725 [2024-12-09 06:07:52.110225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.725 [2024-12-09 06:07:52.110236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.725 [2024-12-09 06:07:52.110254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.725 [2024-12-09 06:07:52.110266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.725 [2024-12-09 06:07:52.110275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.725 [2024-12-09 06:07:52.110287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:08:57.725 [2024-12-09 06:07:52.110295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.725 [2024-12-09 06:07:52.110306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ec400 is same with the state(6) to be set 01:08:57.725 [2024-12-09 06:07:52.110318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 01:08:57.725 [2024-12-09 06:07:52.110326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 01:08:57.725 [2024-12-09 06:07:52.110334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97488 len:8 PRP1 0x0 PRP2 0x0 01:08:57.725 [2024-12-09 06:07:52.110343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.725 [2024-12-09 06:07:52.110480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:08:57.725 [2024-12-09 06:07:52.110498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.725 [2024-12-09 06:07:52.110509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:08:57.725 [2024-12-09 06:07:52.110518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.725 [2024-12-09 06:07:52.110528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:08:57.725 [2024-12-09 06:07:52.110537] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.725 [2024-12-09 06:07:52.110546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:08:57.725 [2024-12-09 06:07:52.110555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:08:57.725 [2024-12-09 06:07:52.110563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980f30 is same with the state(6) to be set 01:08:57.725 [2024-12-09 06:07:52.110847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 01:08:57.725 [2024-12-09 06:07:52.110875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x980f30 (9): Bad file descriptor 01:08:57.725 [2024-12-09 06:07:52.110982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:08:57.725 [2024-12-09 06:07:52.111007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x980f30 with addr=10.0.0.3, port=4420 01:08:57.725 [2024-12-09 06:07:52.111019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980f30 is same with the state(6) to be set 01:08:57.725 [2024-12-09 06:07:52.111037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x980f30 (9): Bad file descriptor 01:08:57.725 [2024-12-09 06:07:52.111054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 01:08:57.725 [2024-12-09 06:07:52.111064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 01:08:57.725 [2024-12-09 06:07:52.111074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 01:08:57.725 [2024-12-09 06:07:52.111084] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
01:08:57.725 [2024-12-09 06:07:52.111095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 01:08:57.725 06:07:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 96869 01:08:59.592 10741.50 IOPS, 41.96 MiB/s [2024-12-09T06:07:54.178Z] 7161.00 IOPS, 27.97 MiB/s [2024-12-09T06:07:54.178Z] [2024-12-09 06:07:54.125263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:08:59.592 [2024-12-09 06:07:54.125328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x980f30 with addr=10.0.0.3, port=4420 01:08:59.592 [2024-12-09 06:07:54.125347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980f30 is same with the state(6) to be set 01:08:59.592 [2024-12-09 06:07:54.125373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x980f30 (9): Bad file descriptor 01:08:59.592 [2024-12-09 06:07:54.125394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 01:08:59.592 [2024-12-09 06:07:54.125404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 01:08:59.592 [2024-12-09 06:07:54.125415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 01:08:59.592 [2024-12-09 06:07:54.125427] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 01:08:59.592 [2024-12-09 06:07:54.125438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 01:09:01.459 5370.75 IOPS, 20.98 MiB/s [2024-12-09T06:07:56.303Z] 4296.60 IOPS, 16.78 MiB/s [2024-12-09T06:07:56.303Z] [2024-12-09 06:07:56.125645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 01:09:01.717 [2024-12-09 06:07:56.125723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x980f30 with addr=10.0.0.3, port=4420 01:09:01.717 [2024-12-09 06:07:56.125739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x980f30 is same with the state(6) to be set 01:09:01.717 [2024-12-09 06:07:56.125783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x980f30 (9): Bad file descriptor 01:09:01.717 [2024-12-09 06:07:56.125810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 01:09:01.717 [2024-12-09 06:07:56.125820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 01:09:01.717 [2024-12-09 06:07:56.125831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 01:09:01.717 [2024-12-09 06:07:56.125842] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 01:09:01.717 [2024-12-09 06:07:56.125852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 01:09:03.588 3580.50 IOPS, 13.99 MiB/s [2024-12-09T06:07:58.174Z] 3069.00 IOPS, 11.99 MiB/s [2024-12-09T06:07:58.174Z] [2024-12-09 06:07:58.125921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
01:09:03.588 [2024-12-09 06:07:58.125975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 01:09:03.588 [2024-12-09 06:07:58.125987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 01:09:03.589 [2024-12-09 06:07:58.125996] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 01:09:03.589 [2024-12-09 06:07:58.126007] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 01:09:04.785 2685.38 IOPS, 10.49 MiB/s 01:09:04.785 Latency(us) 01:09:04.785 [2024-12-09T06:07:59.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:09:04.785 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 01:09:04.785 NVMe0n1 : 8.15 2634.93 10.29 15.70 0.00 48222.75 3142.75 7015926.69 01:09:04.785 [2024-12-09T06:07:59.371Z] =================================================================================================================== 01:09:04.785 [2024-12-09T06:07:59.371Z] Total : 2634.93 10.29 15.70 0.00 48222.75 3142.75 7015926.69 01:09:04.785 { 01:09:04.785 "results": [ 01:09:04.785 { 01:09:04.785 "job": "NVMe0n1", 01:09:04.785 "core_mask": "0x4", 01:09:04.785 "workload": "randread", 01:09:04.785 "status": "finished", 01:09:04.785 "queue_depth": 128, 01:09:04.785 "io_size": 4096, 01:09:04.785 "runtime": 8.15316, 01:09:04.785 "iops": 2634.929278954418, 01:09:04.785 "mibps": 10.292692495915695, 01:09:04.785 "io_failed": 128, 01:09:04.785 "io_timeout": 0, 01:09:04.785 "avg_latency_us": 48222.75190832952, 01:09:04.785 "min_latency_us": 3142.7490909090907, 01:09:04.785 "max_latency_us": 7015926.69090909 01:09:04.785 } 01:09:04.785 ], 01:09:04.785 "core_count": 1 01:09:04.785 } 01:09:04.785 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:09:04.785 Attaching 5 probes... 
01:09:04.785 1440.864470: reset bdev controller NVMe0 01:09:04.785 1440.935475: reconnect bdev controller NVMe0 01:09:04.785 3455.110127: reconnect delay bdev controller NVMe0 01:09:04.785 3455.163414: reconnect bdev controller NVMe0 01:09:04.785 5455.508086: reconnect delay bdev controller NVMe0 01:09:04.785 5455.560732: reconnect bdev controller NVMe0 01:09:04.785 7455.904698: reconnect delay bdev controller NVMe0 01:09:04.785 7455.922412: reconnect bdev controller NVMe0 01:09:04.785 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 01:09:04.785 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 01:09:04.785 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 96809 01:09:04.785 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 01:09:04.785 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 96800 01:09:04.785 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 96800 ']' 01:09:04.785 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 96800 01:09:04.785 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 01:09:04.785 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:09:04.785 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96800 01:09:04.785 killing process with pid 96800 01:09:04.785 Received shutdown signal, test time was about 8.217658 seconds 01:09:04.785 01:09:04.785 Latency(us) 01:09:04.785 [2024-12-09T06:07:59.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:09:04.785 [2024-12-09T06:07:59.371Z] =================================================================================================================== 01:09:04.785 [2024-12-09T06:07:59.371Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:09:04.785 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 01:09:04.785 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 01:09:04.785 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96800' 01:09:04.785 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 96800 01:09:04.785 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 96800 01:09:04.785 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:09:05.044 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 01:09:05.044 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 01:09:05.044 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 01:09:05.044 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 01:09:05.303 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:09:05.303 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 01:09:05.303 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 01:09:05.303 06:07:59 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:09:05.303 rmmod nvme_tcp 01:09:05.303 rmmod nvme_fabrics 01:09:05.303 rmmod nvme_keyring 01:09:05.303 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:09:05.303 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 01:09:05.303 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 01:09:05.303 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 96251 ']' 01:09:05.303 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 96251 01:09:05.303 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 96251 ']' 01:09:05.303 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 96251 01:09:05.303 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 01:09:05.303 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:09:05.303 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96251 01:09:05.303 killing process with pid 96251 01:09:05.303 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:09:05.303 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:09:05.303 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96251' 01:09:05.303 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 96251 01:09:05.303 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 96251 01:09:05.562 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:09:05.562 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:09:05.563 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:09:05.563 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 01:09:05.563 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 01:09:05.563 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 01:09:05.563 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:09:05.563 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:09:05.563 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:09:05.563 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:09:05.563 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:09:05.563 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:09:05.563 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:09:05.563 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:09:05.563 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:09:05.563 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:09:05.563 06:07:59 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:09:05.563 06:07:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:09:05.563 06:08:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:09:05.563 06:08:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:09:05.563 06:08:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:09:05.563 06:08:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:09:05.563 06:08:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 01:09:05.563 06:08:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:09:05.563 06:08:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:09:05.563 06:08:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:09:05.822 06:08:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 01:09:05.822 01:09:05.822 real 0m45.278s 01:09:05.822 user 2m13.281s 01:09:05.822 sys 0m4.718s 01:09:05.822 06:08:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 01:09:05.822 06:08:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 01:09:05.822 ************************************ 01:09:05.822 END TEST nvmf_timeout 01:09:05.822 ************************************ 01:09:05.822 06:08:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 01:09:05.822 06:08:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 01:09:05.822 01:09:05.822 real 5m36.796s 01:09:05.822 user 14m33.747s 01:09:05.822 sys 1m0.910s 01:09:05.822 06:08:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 01:09:05.822 06:08:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 01:09:05.822 ************************************ 01:09:05.822 END TEST nvmf_host 01:09:05.822 ************************************ 01:09:05.822 06:08:00 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 01:09:05.822 06:08:00 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 01:09:05.822 06:08:00 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 01:09:05.822 06:08:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:09:05.822 06:08:00 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 01:09:05.822 06:08:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:09:05.822 ************************************ 01:09:05.822 START TEST nvmf_target_core_interrupt_mode 01:09:05.822 ************************************ 01:09:05.822 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 01:09:05.822 * Looking for test storage... 
01:09:05.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 01:09:05.822 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:09:05.822 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 01:09:05.822 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:09:06.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:06.083 --rc genhtml_branch_coverage=1 01:09:06.083 --rc genhtml_function_coverage=1 01:09:06.083 --rc genhtml_legend=1 01:09:06.083 --rc geninfo_all_blocks=1 01:09:06.083 --rc geninfo_unexecuted_blocks=1 01:09:06.083 01:09:06.083 ' 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:09:06.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:06.083 --rc genhtml_branch_coverage=1 01:09:06.083 --rc genhtml_function_coverage=1 01:09:06.083 --rc genhtml_legend=1 01:09:06.083 --rc geninfo_all_blocks=1 01:09:06.083 --rc geninfo_unexecuted_blocks=1 01:09:06.083 01:09:06.083 ' 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:09:06.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:06.083 --rc genhtml_branch_coverage=1 01:09:06.083 --rc genhtml_function_coverage=1 01:09:06.083 --rc genhtml_legend=1 01:09:06.083 --rc geninfo_all_blocks=1 01:09:06.083 --rc geninfo_unexecuted_blocks=1 01:09:06.083 01:09:06.083 ' 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:09:06.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:06.083 --rc genhtml_branch_coverage=1 01:09:06.083 --rc genhtml_function_coverage=1 01:09:06.083 --rc genhtml_legend=1 01:09:06.083 --rc geninfo_all_blocks=1 01:09:06.083 --rc geninfo_unexecuted_blocks=1 01:09:06.083 01:09:06.083 ' 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:06.083 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:09:06.084 ************************************ 01:09:06.084 START TEST nvmf_abort 01:09:06.084 ************************************ 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 01:09:06.084 * Looking for test storage... 01:09:06.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 01:09:06.084 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:09:06.343 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 01:09:06.343 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 01:09:06.343 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 01:09:06.343 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:09:06.343 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 01:09:06.343 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 01:09:06.343 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:09:06.343 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 01:09:06.343 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 01:09:06.343 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 01:09:06.343 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 01:09:06.343 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:09:06.343 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 01:09:06.343 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 01:09:06.343 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:09:06.343 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:09:06.343 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:09:06.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:06.344 --rc genhtml_branch_coverage=1 01:09:06.344 --rc genhtml_function_coverage=1 01:09:06.344 --rc genhtml_legend=1 01:09:06.344 --rc geninfo_all_blocks=1 01:09:06.344 --rc geninfo_unexecuted_blocks=1 01:09:06.344 01:09:06.344 ' 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:09:06.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:06.344 --rc genhtml_branch_coverage=1 01:09:06.344 --rc genhtml_function_coverage=1 01:09:06.344 --rc genhtml_legend=1 01:09:06.344 --rc geninfo_all_blocks=1 01:09:06.344 --rc geninfo_unexecuted_blocks=1 01:09:06.344 01:09:06.344 ' 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:09:06.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:06.344 --rc genhtml_branch_coverage=1 01:09:06.344 --rc genhtml_function_coverage=1 01:09:06.344 --rc genhtml_legend=1 01:09:06.344 --rc geninfo_all_blocks=1 01:09:06.344 --rc geninfo_unexecuted_blocks=1 01:09:06.344 01:09:06.344 ' 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:09:06.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:06.344 --rc genhtml_branch_coverage=1 01:09:06.344 --rc genhtml_function_coverage=1 01:09:06.344 --rc genhtml_legend=1 01:09:06.344 --rc geninfo_all_blocks=1 01:09:06.344 --rc geninfo_unexecuted_blocks=1 01:09:06.344 01:09:06.344 ' 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:09:06.344 06:08:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:09:06.344 Cannot find device "nvmf_init_br" 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # true 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:09:06.344 Cannot find device "nvmf_init_br2" 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # true 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:09:06.344 Cannot find device "nvmf_tgt_br" 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # true 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:09:06.344 Cannot find device "nvmf_tgt_br2" 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # true 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:09:06.344 Cannot find device "nvmf_init_br" 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # true 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:09:06.344 Cannot find device "nvmf_init_br2" 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # true 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:09:06.344 Cannot find device "nvmf_tgt_br" 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # true 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:09:06.344 Cannot find device "nvmf_tgt_br2" 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@169 -- # true 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:09:06.344 Cannot find device "nvmf_br" 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # true 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:09:06.344 Cannot find device "nvmf_init_if" 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # true 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:09:06.344 Cannot find device "nvmf_init_if2" 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # true 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:09:06.344 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # true 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:09:06.344 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # true 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:09:06.344 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:09:06.603 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:09:06.604 
06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:09:06.604 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:09:06.604 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:09:06.604 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:09:06.604 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:09:06.604 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:09:06.604 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:09:06.604 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:09:06.604 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:09:06.604 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:09:06.604 06:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:09:06.604 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:09:06.604 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.323 ms 01:09:06.604 01:09:06.604 --- 10.0.0.3 ping statistics --- 01:09:06.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:06.604 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:09:06.604 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:09:06.604 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 01:09:06.604 01:09:06.604 --- 10.0.0.4 ping statistics --- 01:09:06.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:06.604 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:09:06.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:09:06.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 01:09:06.604 01:09:06.604 --- 10.0.0.1 ping statistics --- 01:09:06.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:06.604 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:09:06.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:09:06.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 01:09:06.604 01:09:06.604 --- 10.0.0.2 ping statistics --- 01:09:06.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:06.604 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@461 -- # return 0 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:09:06.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
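The setup traced above is nvmf_veth_init from test/nvmf/common.sh: it creates a dedicated network namespace for the target, builds veth pairs for the initiator and target sides, joins the host-side peers to a single bridge, opens TCP port 4420 in iptables (tagging the rules with an SPDK_NVMF comment so teardown can strip them again), and verifies reachability with ping. A condensed sketch of the commands recorded above, showing only the first veth pair per side (the script repeats the same pattern for nvmf_init_if2/nvmf_tgt_if2):

    # target lives in its own namespace
    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: the *_if ends carry the IPs, the *_br ends join the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # one bridge ties the host-side peers together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # allow NVMe/TCP traffic in and across the bridge (the real rules carry the SPDK_NVMF comment)
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # sanity check: the host side can reach the namespaced target address
    ping -c 1 10.0.0.3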
01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=97290 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 97290 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 97290 ']' 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 01:09:06.604 06:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:09:06.604 [2024-12-09 06:08:01.153364] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:09:06.604 [2024-12-09 06:08:01.154848] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:09:06.604 [2024-12-09 06:08:01.155078] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:09:06.863 [2024-12-09 06:08:01.308310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:09:06.863 [2024-12-09 06:08:01.347748] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:09:06.863 [2024-12-09 06:08:01.348029] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:09:06.863 [2024-12-09 06:08:01.348240] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:09:06.863 [2024-12-09 06:08:01.348390] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:09:06.863 [2024-12-09 06:08:01.348432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:09:06.863 [2024-12-09 06:08:01.349411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:09:06.863 [2024-12-09 06:08:01.349525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:09:06.863 [2024-12-09 06:08:01.349532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:09:06.863 [2024-12-09 06:08:01.406060] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:09:06.863 [2024-12-09 06:08:01.406769] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 01:09:06.863 [2024-12-09 06:08:01.406771] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
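nvmfappstart then launches the target inside that namespace; build_nvmf_app_args added -i/-e and --interrupt-mode earlier, and the -m 0xE core mask accounts for the three reactors reported on cores 1-3. waitforlisten blocks until the application serves RPCs on /var/tmp/spdk.sock. A rough equivalent of what the trace records (the polling loop below is a simplification of waitforlisten, not its actual body):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!
    # wait until the RPC socket answers before configuring the target
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.1
    done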
01:09:06.863 [2024-12-09 06:08:01.406845] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:09:07.798 [2024-12-09 06:08:02.215547] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:09:07.798 Malloc0 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:09:07.798 Delay0 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
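rpc_cmd is the autotest wrapper around scripts/rpc.py, so the storage stack assembled above can be read as plain RPC calls: a TCP transport, a 64 MB / 4096-byte-block malloc bdev, a delay bdev layered on top (large artificial latencies keep I/O outstanding long enough for the abort test to cancel it), and a subsystem exposing the delay bdev as a namespace. Spelled out with the same arguments as the trace (the comments are interpretation, not part of the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # NVMe/TCP transport, options exactly as passed by target/abort.sh
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    # MALLOC_BDEV_SIZE=64 (MB), MALLOC_BLOCK_SIZE=4096
    $rpc bdev_malloc_create 64 4096 -b Malloc0
    # delay bdev over Malloc0 so queued I/O can actually be aborted
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # subsystem open to any host (-a), serial SPDK0, backed by Delay0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0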
01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:09:07.798 [2024-12-09 06:08:02.287531] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:07.798 06:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 01:09:08.056 [2024-12-09 06:08:02.470115] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 01:09:09.951 Initializing NVMe Controllers 01:09:09.951 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 01:09:09.951 controller IO queue size 128 less than required 01:09:09.951 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 01:09:09.951 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 01:09:09.951 Initialization complete. Launching workers. 
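With the subsystem in place, the test adds TCP listeners (data and discovery) on the namespaced address 10.0.0.3:4420 and drives them from the host side with SPDK's abort example; the NS/CTRLR summary that follows counts completed I/O against successfully aborted commands. The two steps as they appear in the trace, with the example's flags read as core mask 0x1, roughly a one-second run at queue depth 128 and warning-level logging:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # listen on the target-side veth address inside the namespace
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    # generate load and abort it over NVMe/TCP
    /home/vagrant/spdk_repo/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128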
01:09:09.951 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 26839 01:09:09.951 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 26896, failed to submit 66 01:09:09.951 success 26839, unsuccessful 57, failed 0 01:09:09.951 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:09:09.951 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:09.951 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:09:09.951 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:09.951 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 01:09:09.951 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 01:09:09.951 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 01:09:09.951 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 01:09:10.209 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:09:10.209 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 01:09:10.209 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 01:09:10.209 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:09:10.209 rmmod nvme_tcp 01:09:10.209 rmmod nvme_fabrics 01:09:10.209 rmmod nvme_keyring 01:09:10.209 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:09:10.209 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 01:09:10.209 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 01:09:10.209 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 97290 ']' 01:09:10.209 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 97290 01:09:10.209 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 97290 ']' 01:09:10.209 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 97290 01:09:10.209 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 01:09:10.209 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:09:10.209 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97290 01:09:10.209 killing process with pid 97290 01:09:10.209 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:09:10.209 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:09:10.209 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97290' 01:09:10.209 
06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 97290 01:09:10.209 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 97290 01:09:10.209 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:09:10.209 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:09:10.209 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:09:10.209 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 01:09:10.209 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 01:09:10.209 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:09:10.209 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 01:09:10.467 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:09:10.467 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:09:10.467 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:09:10.467 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:09:10.467 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:09:10.467 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:09:10.467 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:09:10.467 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:09:10.467 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:09:10.467 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:09:10.467 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:09:10.467 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:09:10.467 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:09:10.467 06:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:09:10.467 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:09:10.467 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 01:09:10.467 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:09:10.467 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:09:10.467 06:08:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@300 -- # return 0 01:09:10.735 01:09:10.735 real 0m4.581s 01:09:10.735 user 0m8.993s 01:09:10.735 sys 0m1.526s 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 01:09:10.735 ************************************ 01:09:10.735 END TEST nvmf_abort 01:09:10.735 ************************************ 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:09:10.735 ************************************ 01:09:10.735 START TEST nvmf_ns_hotplug_stress 01:09:10.735 ************************************ 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 01:09:10.735 * Looking for test storage... 01:09:10.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 01:09:10.735 06:08:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:09:10.735 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:09:10.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:10.736 --rc genhtml_branch_coverage=1 01:09:10.736 --rc genhtml_function_coverage=1 01:09:10.736 --rc genhtml_legend=1 01:09:10.736 --rc geninfo_all_blocks=1 01:09:10.736 --rc geninfo_unexecuted_blocks=1 01:09:10.736 01:09:10.736 ' 01:09:10.736 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:09:10.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:10.736 --rc genhtml_branch_coverage=1 01:09:10.736 --rc genhtml_function_coverage=1 01:09:10.736 --rc genhtml_legend=1 01:09:10.736 --rc geninfo_all_blocks=1 01:09:10.736 --rc geninfo_unexecuted_blocks=1 01:09:10.736 01:09:10.736 
' 01:09:10.736 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:09:10.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:10.736 --rc genhtml_branch_coverage=1 01:09:10.736 --rc genhtml_function_coverage=1 01:09:10.736 --rc genhtml_legend=1 01:09:10.736 --rc geninfo_all_blocks=1 01:09:10.736 --rc geninfo_unexecuted_blocks=1 01:09:10.736 01:09:10.736 ' 01:09:10.736 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:09:10.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:10.736 --rc genhtml_branch_coverage=1 01:09:10.736 --rc genhtml_function_coverage=1 01:09:10.736 --rc genhtml_legend=1 01:09:10.736 --rc geninfo_all_blocks=1 01:09:10.736 --rc geninfo_unexecuted_blocks=1 01:09:10.736 01:09:10.736 ' 01:09:10.736 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:09:10.736 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 01:09:10.736 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:09:10.736 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:09:10.736 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:09:10.736 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:09:10.736 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:09:10.736 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:09:10.736 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:09:10.736 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:09:10.736 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:09:10.736 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:09:10.736 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:09:10.736 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:09:10.736 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:09:10.736 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:09:10.736 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:09:10.736 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:09:10.736 06:08:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:09:10.736 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 01:09:10.736 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:09:10.736 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:09:10.736 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:09:10.736 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:10.736 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:10.736 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:10.736 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 01:09:10.737 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:10.737 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 01:09:10.737 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:09:10.737 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:09:10.737 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:09:10.737 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:09:10.737 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:09:10.737 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:09:10.737 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:09:10.737 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:09:10.737 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:09:10.737 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 01:09:10.737 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:09:10.737 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 01:09:10.737 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:09:10.737 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:09:10.737 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 01:09:10.737 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 01:09:10.737 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 01:09:10.737 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:09:10.737 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:09:10.737 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:09:11.001 06:08:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:09:11.001 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:09:11.001 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:09:11.001 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:09:11.001 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:09:11.001 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 01:09:11.001 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:09:11.001 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:09:11.001 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:09:11.001 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:09:11.001 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:09:11.001 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:09:11.001 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:09:11.001 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:09:11.001 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:09:11.001 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:09:11.001 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:09:11.001 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:09:11.001 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:09:11.001 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:09:11.001 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:09:11.001 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:09:11.001 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:09:11.001 Cannot find device "nvmf_init_br" 01:09:11.001 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 01:09:11.001 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 
-- # ip link set nvmf_init_br2 nomaster 01:09:11.001 Cannot find device "nvmf_init_br2" 01:09:11.001 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 01:09:11.001 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:09:11.001 Cannot find device "nvmf_tgt_br" 01:09:11.001 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 01:09:11.001 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:09:11.001 Cannot find device "nvmf_tgt_br2" 01:09:11.001 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 01:09:11.001 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:09:11.001 Cannot find device "nvmf_init_br" 01:09:11.001 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 01:09:11.002 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:09:11.002 Cannot find device "nvmf_init_br2" 01:09:11.002 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 01:09:11.002 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:09:11.002 Cannot find device "nvmf_tgt_br" 01:09:11.002 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 01:09:11.002 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:09:11.002 Cannot find device "nvmf_tgt_br2" 01:09:11.002 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 01:09:11.002 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:09:11.002 Cannot find device "nvmf_br" 01:09:11.002 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # true 01:09:11.002 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:09:11.002 Cannot find device "nvmf_init_if" 01:09:11.002 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 01:09:11.002 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:09:11.002 Cannot find device "nvmf_init_if2" 01:09:11.002 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 01:09:11.002 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:09:11.002 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:09:11.002 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 01:09:11.002 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:09:11.002 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:09:11.002 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 01:09:11.002 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:09:11.002 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:09:11.002 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:09:11.002 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:09:11.002 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:09:11.002 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:09:11.002 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:09:11.002 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:09:11.260 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:09:11.261 06:08:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:09:11.261 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:09:11.261 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 01:09:11.261 01:09:11.261 --- 10.0.0.3 ping statistics --- 01:09:11.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:11.261 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:09:11.261 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:09:11.261 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 01:09:11.261 01:09:11.261 --- 10.0.0.4 ping statistics --- 01:09:11.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:11.261 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:09:11.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:09:11.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 01:09:11.261 01:09:11.261 --- 10.0.0.1 ping statistics --- 01:09:11.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:11.261 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:09:11.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:09:11.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 01:09:11.261 01:09:11.261 --- 10.0.0.2 ping statistics --- 01:09:11.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:11.261 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=97608 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 97608 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 97608 ']' 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 01:09:11.261 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 01:09:11.261 06:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 01:09:11.261 [2024-12-09 06:08:05.845145] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:09:11.520 [2024-12-09 06:08:05.846452] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:09:11.520 [2024-12-09 06:08:05.846523] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:09:11.520 [2024-12-09 06:08:05.999876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:09:11.520 [2024-12-09 06:08:06.038328] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:09:11.520 [2024-12-09 06:08:06.038404] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:09:11.520 [2024-12-09 06:08:06.038429] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:09:11.520 [2024-12-09 06:08:06.038439] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:09:11.520 [2024-12-09 06:08:06.038448] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:09:11.520 [2024-12-09 06:08:06.039319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:09:11.520 [2024-12-09 06:08:06.039451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:09:11.520 [2024-12-09 06:08:06.039459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:09:11.520 [2024-12-09 06:08:06.095695] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:09:11.520 [2024-12-09 06:08:06.096683] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 01:09:11.520 [2024-12-09 06:08:06.096699] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:09:11.520 [2024-12-09 06:08:06.096737] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
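For readability, the network environment that nvmf_veth_init has just assembled (everything traced above) boils down to the shell sketch below; the interface names, addresses, bridge and iptables rules are copied from the trace, and the intermediate "ip link set ... up" steps are omitted for brevity:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk            # target-side ends live in the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                   # first and second initiator IPs
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # first and second target IPs
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br  master nvmf_br                   # all peer ends joined on one bridge
    ip link set nvmf_init_br2 master nvmf_br
    ip link set nvmf_tgt_br   master nvmf_br
    ip link set nvmf_tgt_br2  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The target application (build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE) is launched inside that namespace via "ip netns exec nvmf_tgt_ns_spdk", as traced above, so its listeners sit on 10.0.0.3/10.0.0.4 while the initiator side (spdk_nvme_perf, traced further below) connects from the root namespace across the bridge.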
01:09:11.778 06:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:09:11.778 06:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 01:09:11.778 06:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:09:11.778 06:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 01:09:11.778 06:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 01:09:11.778 06:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:09:11.778 06:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 01:09:11.778 06:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:09:12.037 [2024-12-09 06:08:06.452429] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:09:12.037 06:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 01:09:12.295 06:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:09:12.553 [2024-12-09 06:08:06.976965] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:09:12.553 06:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:09:12.811 06:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 01:09:13.070 Malloc0 01:09:13.070 06:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 01:09:13.381 Delay0 01:09:13.381 06:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:13.657 06:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 01:09:13.916 NULL1 01:09:13.916 06:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 01:09:14.174 06:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 
'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 01:09:14.174 06:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=97727 01:09:14.174 06:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97727 01:09:14.174 06:08:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:15.551 Read completed with error (sct=0, sc=11) 01:09:15.551 06:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:15.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:09:15.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:09:15.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:09:15.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:09:15.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:09:15.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:09:15.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:09:15.810 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:09:15.810 06:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 01:09:15.810 06:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 01:09:16.069 true 01:09:16.069 06:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97727 01:09:16.069 06:08:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:17.007 06:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:17.007 06:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 01:09:17.007 06:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 01:09:17.265 true 01:09:17.265 06:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97727 01:09:17.265 06:08:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:17.524 06:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:17.781 06:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- 
# null_size=1003 01:09:17.781 06:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 01:09:18.039 true 01:09:18.039 06:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97727 01:09:18.039 06:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:18.296 06:08:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:18.554 06:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 01:09:18.554 06:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 01:09:18.811 true 01:09:18.811 06:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97727 01:09:18.811 06:08:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:19.747 06:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:20.004 06:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 01:09:20.004 06:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 01:09:20.262 true 01:09:20.262 06:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97727 01:09:20.262 06:08:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:20.519 06:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:20.776 06:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 01:09:20.776 06:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 01:09:21.034 true 01:09:21.034 06:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97727 01:09:21.034 06:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:21.291 06:08:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:21.548 06:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 01:09:21.548 06:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 01:09:21.809 true 01:09:21.809 06:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97727 01:09:21.809 06:08:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:22.743 06:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:22.743 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 01:09:23.001 06:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 01:09:23.001 06:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 01:09:23.260 true 01:09:23.518 06:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97727 01:09:23.518 06:08:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:23.777 06:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:24.035 06:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 01:09:24.035 06:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 01:09:24.293 true 01:09:24.293 06:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97727 01:09:24.293 06:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:24.552 06:08:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:24.811 06:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 01:09:24.811 06:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 01:09:25.070 true 01:09:25.070 06:08:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97727 01:09:25.070 06:08:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:26.006 06:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:26.006 06:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 01:09:26.006 06:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 01:09:26.265 true 01:09:26.266 06:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97727 01:09:26.266 06:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:26.525 06:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:26.828 06:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 01:09:26.828 06:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 01:09:27.109 true 01:09:27.109 06:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97727 01:09:27.109 06:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:27.367 06:08:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:27.626 06:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 01:09:27.626 06:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 01:09:27.885 true 01:09:27.885 06:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97727 01:09:27.885 06:08:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:28.820 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:29.079 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 01:09:29.079 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 01:09:29.382 true 01:09:29.382 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97727 01:09:29.382 06:08:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:29.640 06:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:29.898 06:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 01:09:29.898 06:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 01:09:30.156 true 01:09:30.156 06:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97727 01:09:30.156 06:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:30.414 06:08:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:30.671 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 01:09:30.671 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 01:09:30.929 true 01:09:30.929 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97727 01:09:30.929 06:08:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:31.863 06:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:32.121 06:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 01:09:32.121 06:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 01:09:32.380 true 01:09:32.380 06:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97727 01:09:32.380 06:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:32.638 06:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:32.897 06:08:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 01:09:32.897 06:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 01:09:33.156 true 01:09:33.416 06:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97727 01:09:33.416 06:08:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:33.675 06:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:33.934 06:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 01:09:33.934 06:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 01:09:34.193 true 01:09:34.193 06:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97727 01:09:34.193 06:08:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:34.762 06:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:35.020 06:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 01:09:35.020 06:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 01:09:35.279 true 01:09:35.280 06:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97727 01:09:35.280 06:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:35.539 06:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:36.108 06:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 01:09:36.108 06:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 01:09:36.108 true 01:09:36.108 06:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97727 01:09:36.108 06:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:36.367 06:08:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:36.633 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 01:09:36.633 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 01:09:36.893 true 01:09:36.893 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97727 01:09:36.893 06:08:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:37.828 06:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:38.087 06:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 01:09:38.087 06:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 01:09:38.346 true 01:09:38.346 06:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97727 01:09:38.346 06:08:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:38.914 06:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:38.914 06:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 01:09:38.914 06:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 01:09:39.173 true 01:09:39.432 06:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97727 01:09:39.432 06:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:39.690 06:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:39.971 06:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 01:09:39.971 06:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 01:09:40.234 true 01:09:40.234 06:08:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97727 01:09:40.234 06:08:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:40.813 06:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:41.071 06:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 01:09:41.071 06:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 01:09:41.329 true 01:09:41.329 06:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97727 01:09:41.329 06:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:41.588 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:41.847 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 01:09:41.847 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 01:09:42.104 true 01:09:42.362 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97727 01:09:42.362 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:42.620 06:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:42.878 06:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 01:09:42.878 06:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 01:09:43.136 true 01:09:43.137 06:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97727 01:09:43.137 06:08:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:44.071 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:44.330 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 01:09:44.330 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 01:09:44.330 true 01:09:44.330 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97727 01:09:44.330 06:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:44.590 Initializing NVMe Controllers 01:09:44.590 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:09:44.590 Controller IO queue size 128, less than required. 01:09:44.590 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:09:44.590 Controller IO queue size 128, less than required. 01:09:44.590 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:09:44.590 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:09:44.590 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 01:09:44.590 Initialization complete. Launching workers. 01:09:44.590 ======================================================== 01:09:44.590 Latency(us) 01:09:44.590 Device Information : IOPS MiB/s Average min max 01:09:44.591 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 381.00 0.19 135355.93 3178.21 1023991.30 01:09:44.591 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8325.10 4.06 15375.07 2898.75 648833.00 01:09:44.591 ======================================================== 01:09:44.591 Total : 8706.10 4.25 20625.76 2898.75 1023991.30 01:09:44.591 01:09:44.591 06:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:44.850 06:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 01:09:44.850 06:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 01:09:45.418 true 01:09:45.418 06:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97727 01:09:45.418 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (97727) - No such process 01:09:45.418 06:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 97727 01:09:45.418 06:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:45.418 06:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:09:45.677 06:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 01:09:45.677 06:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 01:09:45.677 06:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 01:09:45.677 06:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:09:45.677 06:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 01:09:45.936 null0 01:09:45.936 06:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:09:45.936 06:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:09:45.936 06:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 01:09:46.195 null1 01:09:46.195 06:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:09:46.195 06:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:09:46.195 06:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 01:09:46.454 null2 01:09:46.454 06:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:09:46.454 06:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:09:46.454 06:08:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 01:09:46.713 null3 01:09:46.713 06:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:09:46.713 06:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:09:46.713 06:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 01:09:46.972 null4 01:09:46.972 06:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:09:46.972 06:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:09:46.972 06:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 01:09:47.231 null5 01:09:47.231 06:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:09:47.231 06:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:09:47.231 06:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 01:09:47.490 null6 01:09:47.490 06:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:09:47.490 
06:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:09:47.490 06:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 01:09:47.750 null7 01:09:47.750 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 01:09:47.750 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 01:09:47.750 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 01:09:47.750 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:09:47.750 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 01:09:47.750 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 01:09:47.750 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:09:47.750 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:09:47.750 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 01:09:47.750 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:09:47.750 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:47.750 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:09:47.750 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 01:09:47.750 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 01:09:47.750 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 01:09:47.750 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:09:47.750 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:09:47.750 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:09:47.750 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:47.750 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 98769 98770 98772 98773 98776 98778 98779 98781 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:47.751 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:09:48.010 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:09:48.010 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:09:48.010 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:09:48.010 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:48.010 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:09:48.269 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:09:48.269 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:09:48.269 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:09:48.269 
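[editor's note] The interleaved trace above shows the stress workers being launched: script lines 62-64 fork one add_remove job per bdev and collect the PIDs, line 66 waits on them, and lines 14-18 are each worker's add/remove loop. A minimal sketch reconstructed from the trace, assuming the same variable names; the real script's error handling is omitted:

    # Reconstructed sketch of ns_hotplug_stress.sh lines 14-18 and 62-66.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    nthreads=8

    add_remove() {
        local nsid=$1 bdev=$2
        # Hot-plug the namespace ten times: attach the null bdev, then detach it.
        for (( i = 0; i < 10; i++ )); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"
        done
    }

    pids=()
    for (( i = 0; i < nthreads; i++ )); do
        add_remove $(( i + 1 )) "null$i" &   # one worker per namespace/bdev pair, as in the trace
        pids+=($!)
    done
    wait "${pids[@]}"                        # matches the 'wait 98769 98770 ...' line above

The shuffled ordering of the add_ns/remove_ns lines that follows is expected: the eight workers run concurrently against the same subsystem, which is exactly the hot-plug race the test exercises.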
06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:48.269 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:48.269 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:09:48.269 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:48.269 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:48.269 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:09:48.269 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:48.269 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:48.269 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:09:48.528 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:48.528 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:48.528 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:09:48.528 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:48.528 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:48.528 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:09:48.528 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:48.528 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:48.528 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:09:48.528 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:48.528 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:48.528 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:09:48.528 06:08:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:48.529 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:48.529 06:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:09:48.529 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:09:48.788 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:09:48.788 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:09:48.788 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:09:48.788 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:09:48.788 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:09:48.788 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:09:48.788 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:49.047 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:49.047 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:49.047 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:09:49.047 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:49.047 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:49.047 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:09:49.047 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:49.047 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:49.047 
06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:09:49.047 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:49.047 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:49.047 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:09:49.047 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:49.047 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:49.047 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:09:49.047 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:49.047 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:49.047 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:09:49.047 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:49.047 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:49.047 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:09:49.047 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:49.047 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:49.047 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:09:49.306 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:09:49.306 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:09:49.306 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:09:49.306 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:09:49.306 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:09:49.306 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:49.306 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:09:49.564 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:09:49.564 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:49.564 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:49.564 06:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:09:49.564 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:49.564 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:49.564 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:09:49.564 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:49.564 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:49.564 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:09:49.564 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:49.564 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:49.564 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:09:49.564 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:49.564 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:49.564 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:09:49.564 06:08:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:49.564 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:49.564 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:09:49.823 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:49.823 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:49.823 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:09:49.823 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:49.823 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:49.823 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:09:49.823 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:09:49.823 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:09:49.823 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:09:49.823 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:49.823 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:09:50.081 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:09:50.081 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:09:50.081 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:09:50.081 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:50.081 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:50.081 
06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:09:50.081 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:50.081 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:50.081 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:09:50.081 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:50.081 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:50.081 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:09:50.081 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:50.081 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:50.081 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:09:50.341 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:50.341 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:50.341 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:09:50.341 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:50.341 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:50.341 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:09:50.341 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:50.341 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:50.341 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:09:50.341 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:50.341 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:50.341 06:08:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:09:50.341 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:09:50.341 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:09:50.341 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:09:50.600 06:08:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:50.600 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:09:50.601 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:09:50.601 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:09:50.601 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:09:50.858 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:50.858 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:50.858 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:09:50.858 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:50.858 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:50.858 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:09:50.858 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:50.858 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:50.858 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:09:50.858 06:08:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:50.858 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:50.858 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:09:50.858 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:50.858 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:50.858 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:09:50.858 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:50.858 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:50.858 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:09:50.858 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:50.858 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:50.858 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:09:51.115 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:09:51.115 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:51.115 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:51.115 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:09:51.115 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:09:51.115 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:09:51.116 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:09:51.116 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:09:51.116 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:51.374 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:09:51.374 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:09:51.374 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:51.374 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:51.374 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:09:51.374 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:51.374 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:51.374 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:09:51.374 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:51.374 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:51.374 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:09:51.374 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:51.374 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:51.374 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:09:51.374 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:51.374 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:51.374 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:09:51.632 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:51.632 06:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:51.632 06:08:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:09:51.632 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:51.632 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:51.632 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:09:51.632 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:51.632 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:51.632 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:09:51.632 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:09:51.632 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:09:51.632 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:09:51.632 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:09:51.632 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:09:51.890 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:51.890 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:09:51.890 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:09:51.890 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:51.890 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:51.890 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:09:51.890 06:08:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:51.890 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:51.890 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:09:52.147 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:52.147 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:52.147 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:09:52.147 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:52.147 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:52.147 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:09:52.147 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:52.147 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:52.147 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:09:52.147 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:52.147 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:52.147 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:09:52.147 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:52.147 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:52.147 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:09:52.147 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:52.147 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:52.147 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:09:52.147 06:08:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:09:52.147 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:09:52.405 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:09:52.405 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:09:52.405 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:09:52.405 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:52.405 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:09:52.405 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:09:52.664 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:52.664 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:52.664 06:08:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:09:52.664 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:52.664 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:52.664 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:09:52.664 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:52.664 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:52.664 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:09:52.664 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:52.664 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:52.664 
06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:09:52.664 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:52.664 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:52.664 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:09:52.664 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:52.664 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:52.664 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:09:52.664 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:52.664 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:52.664 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:09:52.664 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:52.664 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:52.664 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:09:52.922 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:09:52.922 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:09:52.922 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:09:52.922 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:09:52.922 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:09:53.180 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 01:09:53.180 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:09:53.180 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:53.180 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:53.180 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 01:09:53.180 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:53.180 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:53.180 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:53.180 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 01:09:53.180 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:53.180 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:53.180 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 01:09:53.180 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:53.180 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:53.180 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 01:09:53.437 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:53.437 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:53.437 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 01:09:53.437 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:53.437 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:53.437 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 01:09:53.437 06:08:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:53.437 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:53.437 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 01:09:53.437 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 01:09:53.437 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:53.437 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:53.437 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 01:09:53.437 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 01:09:53.437 06:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 01:09:53.703 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 01:09:53.703 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 01:09:53.703 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 01:09:53.703 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 01:09:53.703 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:53.703 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:53.703 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:53.703 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:53.703 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:09:53.703 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:53.703 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:54.023 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:54.023 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:54.023 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:54.023 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:54.023 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:54.023 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:54.023 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:54.023 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:54.023 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 01:09:54.023 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 01:09:54.023 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 01:09:54.023 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 01:09:54.023 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 01:09:54.023 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 01:09:54.023 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:09:54.023 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 01:09:54.023 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 01:09:54.023 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:09:54.023 rmmod nvme_tcp 01:09:54.023 rmmod nvme_fabrics 01:09:54.023 rmmod nvme_keyring 01:09:54.023 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:09:54.023 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 01:09:54.023 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 01:09:54.023 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 97608 ']' 01:09:54.023 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 97608 01:09:54.023 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 97608 ']' 01:09:54.023 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 97608 01:09:54.023 06:08:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 01:09:54.023 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:09:54.023 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97608 01:09:54.281 killing process with pid 97608 01:09:54.281 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:09:54.281 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:09:54.281 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97608' 01:09:54.281 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 97608 01:09:54.281 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 97608 01:09:54.281 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:09:54.281 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:09:54.281 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:09:54.281 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 01:09:54.281 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 01:09:54.281 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:09:54.281 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 01:09:54.281 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:09:54.281 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:09:54.281 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:09:54.281 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:09:54.281 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:09:54.281 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:09:54.281 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:09:54.281 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:09:54.281 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:09:54.281 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:09:54.281 
06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:09:54.539 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:09:54.539 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:09:54.539 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:09:54.539 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:09:54.539 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 01:09:54.539 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:09:54.539 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:09:54.539 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:09:54.539 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0 01:09:54.539 ************************************ 01:09:54.539 END TEST nvmf_ns_hotplug_stress 01:09:54.539 ************************************ 01:09:54.539 01:09:54.539 real 0m43.853s 01:09:54.539 user 3m18.291s 01:09:54.539 sys 0m18.481s 01:09:54.539 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 01:09:54.539 06:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 01:09:54.539 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 01:09:54.539 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:09:54.539 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:09:54.539 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:09:54.539 ************************************ 01:09:54.539 START TEST nvmf_delete_subsystem 01:09:54.539 ************************************ 01:09:54.539 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 01:09:54.539 * Looking for test storage... 
01:09:54.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:09:54.539 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:09:54.539 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 01:09:54.539 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:09:54.798 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:09:54.798 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:09:54.798 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 01:09:54.798 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 01:09:54.798 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 01:09:54.798 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 01:09:54.798 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 01:09:54.798 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 01:09:54.798 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 01:09:54.798 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 01:09:54.798 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 01:09:54.798 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:09:54.798 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 01:09:54.798 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 01:09:54.798 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:09:54.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:54.799 --rc genhtml_branch_coverage=1 01:09:54.799 --rc genhtml_function_coverage=1 01:09:54.799 --rc genhtml_legend=1 01:09:54.799 --rc geninfo_all_blocks=1 01:09:54.799 --rc geninfo_unexecuted_blocks=1 01:09:54.799 01:09:54.799 ' 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:09:54.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:54.799 --rc genhtml_branch_coverage=1 01:09:54.799 --rc genhtml_function_coverage=1 01:09:54.799 --rc genhtml_legend=1 01:09:54.799 --rc geninfo_all_blocks=1 01:09:54.799 --rc geninfo_unexecuted_blocks=1 01:09:54.799 01:09:54.799 ' 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:09:54.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:54.799 --rc genhtml_branch_coverage=1 01:09:54.799 --rc genhtml_function_coverage=1 01:09:54.799 --rc genhtml_legend=1 01:09:54.799 --rc geninfo_all_blocks=1 01:09:54.799 --rc geninfo_unexecuted_blocks=1 01:09:54.799 01:09:54.799 ' 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:09:54.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:09:54.799 --rc genhtml_branch_coverage=1 01:09:54.799 --rc genhtml_function_coverage=1 01:09:54.799 --rc 
genhtml_legend=1 01:09:54.799 --rc geninfo_all_blocks=1 01:09:54.799 --rc geninfo_unexecuted_blocks=1 01:09:54.799 01:09:54.799 ' 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:54.799 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:09:54.800 06:08:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:09:54.800 06:08:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:09:54.800 Cannot find device "nvmf_init_br" 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:09:54.800 Cannot find device "nvmf_init_br2" 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:09:54.800 Cannot find device "nvmf_tgt_br" 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:09:54.800 Cannot find device "nvmf_tgt_br2" 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:09:54.800 Cannot find device "nvmf_init_br" 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:09:54.800 Cannot find device "nvmf_init_br2" 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:09:54.800 Cannot find device "nvmf_tgt_br" 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:09:54.800 Cannot find device "nvmf_tgt_br2" 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:09:54.800 Cannot find device "nvmf_br" 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:09:54.800 Cannot find device "nvmf_init_if" 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:09:54.800 Cannot find device "nvmf_init_if2" 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:09:54.800 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:09:54.800 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 01:09:54.801 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:09:54.801 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:09:54.801 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 01:09:54.801 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:09:54.801 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:09:54.801 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:09:54.801 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:09:55.059 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:09:55.059 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:09:55.059 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:09:55.059 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:09:55.059 06:08:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:09:55.059 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:09:55.059 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:09:55.059 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:09:55.059 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:09:55.059 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:09:55.059 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:09:55.059 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:09:55.059 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:09:55.059 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:09:55.059 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:09:55.059 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:09:55.059 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:09:55.059 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:09:55.059 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:09:55.059 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:09:55.059 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:09:55.059 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:09:55.059 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:09:55.059 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:09:55.059 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:09:55.059 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:09:55.059 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:09:55.059 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:09:55.059 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:09:55.059 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:09:55.059 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 01:09:55.059 01:09:55.059 --- 10.0.0.3 ping statistics --- 01:09:55.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:55.059 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 01:09:55.059 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:09:55.059 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:09:55.059 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 01:09:55.059 01:09:55.059 --- 10.0.0.4 ping statistics --- 01:09:55.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:55.059 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 01:09:55.059 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:09:55.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:09:55.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 01:09:55.059 01:09:55.059 --- 10.0.0.1 ping statistics --- 01:09:55.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:55.059 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 01:09:55.060 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:09:55.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:09:55.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 01:09:55.060 01:09:55.060 --- 10.0.0.2 ping statistics --- 01:09:55.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:09:55.060 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 01:09:55.060 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:09:55.060 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 01:09:55.060 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:09:55.060 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:09:55.060 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:09:55.060 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:09:55.060 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:09:55.060 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:09:55.060 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:09:55.060 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 01:09:55.060 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:09:55.060 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 01:09:55.060 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:09:55.318 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 01:09:55.318 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=100154 01:09:55.318 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 100154 01:09:55.318 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 100154 ']' 01:09:55.318 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:09:55.318 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 01:09:55.318 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:09:55.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
01:09:55.318 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 01:09:55.318 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:09:55.318 [2024-12-09 06:08:49.694713] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:09:55.318 [2024-12-09 06:08:49.695852] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:09:55.318 [2024-12-09 06:08:49.695963] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:09:55.318 [2024-12-09 06:08:49.844215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:09:55.318 [2024-12-09 06:08:49.884020] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:09:55.318 [2024-12-09 06:08:49.884099] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:09:55.318 [2024-12-09 06:08:49.884125] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:09:55.318 [2024-12-09 06:08:49.884135] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:09:55.318 [2024-12-09 06:08:49.884144] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:09:55.318 [2024-12-09 06:08:49.885015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:09:55.318 [2024-12-09 06:08:49.885029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:09:55.576 [2024-12-09 06:08:49.942868] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:09:55.576 [2024-12-09 06:08:49.943612] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 01:09:55.577 [2024-12-09 06:08:49.943771] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
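The trace above (nvmf/common.sh@508-@510) is where the target under test is launched. Condensed into plain shell, and with a simple polling loop standing in for the autotest waitforlisten helper (an assumption for readability, not the helper's real body), the launch looks roughly like this:

    NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Run nvmf_tgt inside the test namespace, in interrupt mode, on cores 0-1 (-m 0x3).
    ip netns exec "$NVMF_TARGET_NAMESPACE" \
        "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!

    # The Unix-domain RPC socket lives on the shared filesystem, so rpc.py can poll it
    # from outside the namespace until the app is ready (stand-in for waitforlisten).
    until "$RPC" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.2
    done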
01:09:55.577 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:09:55.577 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 01:09:55.577 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:09:55.577 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 01:09:55.577 06:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:09:55.577 06:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:09:55.577 06:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:09:55.577 06:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:55.577 06:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:09:55.577 [2024-12-09 06:08:50.017970] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:09:55.577 06:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:55.577 06:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 01:09:55.577 06:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:55.577 06:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:09:55.577 06:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:55.577 06:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:09:55.577 06:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:55.577 06:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:09:55.577 [2024-12-09 06:08:50.038208] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:09:55.577 06:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:55.577 06:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 01:09:55.577 06:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:55.577 06:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:09:55.577 NULL1 01:09:55.577 06:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:55.577 06:08:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 01:09:55.577 06:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:55.577 06:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:09:55.577 Delay0 01:09:55.577 06:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:55.577 06:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:55.577 06:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:55.577 06:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:09:55.577 06:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:55.577 06:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=100196 01:09:55.577 06:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 01:09:55.577 06:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 01:09:55.835 [2024-12-09 06:08:50.239277] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
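The rpc_cmd and perf invocations traced above (delete_subsystem.sh@15-@30) build a deliberately slow namespace and start I/O against it; the nvmf_delete_subsystem call that follows tears the subsystem down while that I/O is still outstanding. Flattened into plain rpc.py calls (rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py, so this is a condensed sketch rather than the script verbatim), the sequence is roughly:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

    "$RPC" nvmf_create_transport -t tcp -o -u 8192
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    "$RPC" bdev_null_create NULL1 1000 512      # 1000 MB null bdev, 512-byte blocks
    # Wrap it in a delay bdev with 1,000,000 us (1 s) latencies so I/O stays in flight.
    "$RPC" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # Start perf against the listener, give it time to queue I/O, then delete the subsystem.
    "$PERF" -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    "$RPC" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The commands still outstanding at that point then complete with errors (sct=0, sc=8), which is the burst of "completed with error" lines that follows.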
01:09:57.739 06:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
01:09:57.739 06:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
01:09:57.739 06:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
01:09:57.739 - 01:09:58.937 Read completed with error (sct=0, sc=8) / Write completed with error (sct=0, sc=8) / starting I/O failed: -6 (repeated for every queued I/O while the subsystem is torn down)
01:09:57.739 [2024-12-09 06:08:52.275502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1a7e0 is same with the state(6) to be set
01:09:57.740 [2024-12-09 06:08:52.277047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f270c000c40 is same with the state(6) to be set
01:09:58.677 [2024-12-09 06:08:53.254083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0eaa0 is same with the state(6) to be set
01:09:58.936 [2024-12-09 06:08:53.275320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1cea0 is same with the state(6) to be set
01:09:58.936 [2024-12-09 06:08:53.276267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f270c00d020 is same with the state(6) to be set
01:09:58.936 [2024-12-09 06:08:53.276569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f270c00d800 is same with the state(6) to be set
01:09:58.937 [2024-12-09 06:08:53.277480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c19a50 is same with the state(6) to be set
01:09:58.937 Initializing NVMe Controllers
01:09:58.937 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
01:09:58.937 Controller IO queue size 128, less than required.
01:09:58.937 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
01:09:58.937 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
01:09:58.937 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
01:09:58.937 Initialization complete. Launching workers.
01:09:58.937 ======================================================== 01:09:58.937 Latency(us) 01:09:58.937 Device Information : IOPS MiB/s Average min max 01:09:58.937 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 161.82 0.08 914197.71 1464.51 1013134.92 01:09:58.937 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 176.21 0.09 980210.89 511.14 2001781.45 01:09:58.937 ======================================================== 01:09:58.937 Total : 338.03 0.17 948609.86 511.14 2001781.45 01:09:58.937 01:09:58.937 [2024-12-09 06:08:53.278060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0eaa0 (9): Bad file descriptor 01:09:58.937 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 01:09:58.937 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:58.937 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 01:09:58.937 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 100196 01:09:58.937 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 01:09:59.506 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 01:09:59.506 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 100196 01:09:59.506 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (100196) - No such process 01:09:59.506 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 100196 01:09:59.506 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 01:09:59.506 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 100196 01:09:59.506 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 01:09:59.506 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:09:59.506 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 01:09:59.506 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:09:59.506 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 100196 01:09:59.506 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 01:09:59.506 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:09:59.506 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:09:59.506 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:09:59.506 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 01:09:59.506 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:59.506 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:09:59.506 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:59.506 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:09:59.506 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:59.506 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:09:59.506 [2024-12-09 06:08:53.806550] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:09:59.506 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:59.506 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 01:09:59.506 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 01:09:59.506 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:09:59.506 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:09:59.506 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=100236 01:09:59.506 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 01:09:59.506 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100236 01:09:59.506 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 01:09:59.506 06:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:09:59.506 [2024-12-09 06:08:53.985930] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
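The repeated kill -0 100236 / sleep 0.5 entries around this point are a bounded polling loop: the test checks every half second whether the background spdk_nvme_perf process (pid 100236 here) is still alive, and gives up after a fixed number of iterations. A rough equivalent of that loop is sketched here; it assumes perf_pid holds the PID of the background perf run and is not the exact delete_subsystem.sh code.

    # Sketch of the bounded wait loop used while perf runs in the background.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        # Stop waiting after roughly 10 seconds (20 iterations x 0.5 s).
        (( delay++ > 20 )) && { echo "perf still running after ~10s" >&2; break; }
        sleep 0.5
    done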
01:09:59.765 06:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:09:59.765 06:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100236 01:09:59.765 06:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:10:00.334 06:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:10:00.334 06:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100236 01:10:00.334 06:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:10:00.909 06:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:10:00.909 06:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100236 01:10:00.909 06:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:10:01.476 06:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:10:01.476 06:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100236 01:10:01.476 06:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:10:02.042 06:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:10:02.042 06:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100236 01:10:02.042 06:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:10:02.300 06:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:10:02.300 06:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100236 01:10:02.300 06:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 01:10:02.560 Initializing NVMe Controllers 01:10:02.560 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:10:02.560 Controller IO queue size 128, less than required. 01:10:02.560 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:10:02.560 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 01:10:02.560 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 01:10:02.560 Initialization complete. Launching workers. 
01:10:02.560 ======================================================== 01:10:02.560 Latency(us) 01:10:02.560 Device Information : IOPS MiB/s Average min max 01:10:02.560 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002813.90 1000281.48 1007296.01 01:10:02.560 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004795.46 1000274.86 1011585.19 01:10:02.560 ======================================================== 01:10:02.560 Total : 256.00 0.12 1003804.68 1000274.86 1011585.19 01:10:02.560 01:10:02.819 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:10:02.819 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100236 01:10:02.819 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (100236) - No such process 01:10:02.819 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 100236 01:10:02.819 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 01:10:02.819 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 01:10:02.819 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 01:10:02.819 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 01:10:02.819 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:10:02.819 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 01:10:02.819 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 01:10:02.819 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:10:02.819 rmmod nvme_tcp 01:10:02.819 rmmod nvme_fabrics 01:10:03.079 rmmod nvme_keyring 01:10:03.079 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:10:03.079 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 01:10:03.079 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 01:10:03.079 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 100154 ']' 01:10:03.079 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 100154 01:10:03.079 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 100154 ']' 01:10:03.079 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 100154 01:10:03.079 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 01:10:03.079 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:10:03.079 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- 
# ps --no-headers -o comm= 100154 01:10:03.079 killing process with pid 100154 01:10:03.079 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:10:03.079 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:10:03.079 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100154' 01:10:03.079 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 100154 01:10:03.079 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 100154 01:10:03.079 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:10:03.079 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:10:03.079 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:10:03.079 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 01:10:03.079 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 01:10:03.079 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:10:03.079 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 01:10:03.079 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:10:03.079 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:10:03.079 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:10:03.079 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:10:03.079 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:10:03.338 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:10:03.338 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:10:03.338 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:10:03.338 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:10:03.338 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:10:03.338 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:10:03.338 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:10:03.338 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link 
delete nvmf_init_if2 01:10:03.338 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:10:03.338 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:10:03.338 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 01:10:03.338 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:10:03.338 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:10:03.338 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:10:03.338 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 01:10:03.338 01:10:03.338 real 0m8.863s 01:10:03.338 user 0m23.762s 01:10:03.338 sys 0m2.541s 01:10:03.338 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 01:10:03.338 ************************************ 01:10:03.338 END TEST nvmf_delete_subsystem 01:10:03.338 ************************************ 01:10:03.338 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:10:03.600 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 01:10:03.600 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:10:03.600 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:10:03.600 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:10:03.600 ************************************ 01:10:03.600 START TEST nvmf_host_management 01:10:03.600 ************************************ 01:10:03.600 06:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 01:10:03.600 * Looking for test storage... 
01:10:03.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:10:03.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:03.600 --rc genhtml_branch_coverage=1 01:10:03.600 --rc genhtml_function_coverage=1 01:10:03.600 --rc genhtml_legend=1 01:10:03.600 --rc geninfo_all_blocks=1 01:10:03.600 --rc geninfo_unexecuted_blocks=1 01:10:03.600 01:10:03.600 ' 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:10:03.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:03.600 --rc genhtml_branch_coverage=1 01:10:03.600 --rc genhtml_function_coverage=1 01:10:03.600 --rc genhtml_legend=1 01:10:03.600 --rc geninfo_all_blocks=1 01:10:03.600 --rc geninfo_unexecuted_blocks=1 01:10:03.600 01:10:03.600 ' 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:10:03.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:03.600 --rc genhtml_branch_coverage=1 01:10:03.600 --rc genhtml_function_coverage=1 01:10:03.600 --rc genhtml_legend=1 01:10:03.600 --rc geninfo_all_blocks=1 01:10:03.600 --rc geninfo_unexecuted_blocks=1 01:10:03.600 01:10:03.600 ' 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:10:03.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:03.600 --rc genhtml_branch_coverage=1 01:10:03.600 --rc genhtml_function_coverage=1 01:10:03.600 --rc genhtml_legend=1 
01:10:03.600 --rc geninfo_all_blocks=1 01:10:03.600 --rc geninfo_unexecuted_blocks=1 01:10:03.600 01:10:03.600 ' 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:10:03.600 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:10:03.601 06:08:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:10:03.601 06:08:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:10:03.601 Cannot find device "nvmf_init_br" 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # true 01:10:03.601 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:10:03.860 Cannot find device "nvmf_init_br2" 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # true 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:10:03.860 Cannot find device "nvmf_tgt_br" 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # true 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:10:03.860 Cannot find device "nvmf_tgt_br2" 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # true 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:10:03.860 Cannot find device "nvmf_init_br" 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # true 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 
down 01:10:03.860 Cannot find device "nvmf_init_br2" 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # true 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:10:03.860 Cannot find device "nvmf_tgt_br" 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # true 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:10:03.860 Cannot find device "nvmf_tgt_br2" 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # true 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:10:03.860 Cannot find device "nvmf_br" 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # true 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:10:03.860 Cannot find device "nvmf_init_if" 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # true 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:10:03.860 Cannot find device "nvmf_init_if2" 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # true 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:10:03.860 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # true 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:10:03.860 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # true 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns 
nvmf_tgt_ns_spdk 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:10:03.860 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:10:04.119 06:08:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:10:04.119 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:10:04.119 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.121 ms 01:10:04.119 01:10:04.119 --- 10.0.0.3 ping statistics --- 01:10:04.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:04.119 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:10:04.119 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:10:04.119 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.080 ms 01:10:04.119 01:10:04.119 --- 10.0.0.4 ping statistics --- 01:10:04.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:04.119 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:10:04.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:10:04.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 01:10:04.119 01:10:04.119 --- 10.0.0.1 ping statistics --- 01:10:04.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:04.119 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:10:04.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:10:04.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 01:10:04.119 01:10:04.119 --- 10.0.0.2 ping statistics --- 01:10:04.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:04.119 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=100515 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 100515 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 100515 ']' 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 01:10:04.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
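For reference, the block of ip/iptables commands traced above assembles a small veth-and-bridge test network before the target is started: two initiator-side interfaces (10.0.0.1/2) in the default namespace, two target-side interfaces (10.0.0.3/4) inside nvmf_tgt_ns_spdk, all joined through the nvmf_br bridge, with TCP port 4420 opened. A condensed, stand-alone sketch of the same topology (names and addresses taken from the trace; the script itself is illustrative, not the test's own helper code):

  #!/usr/bin/env bash
  set -e
  # Namespace that will host nvmf_tgt.
  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: the *_if ends carry traffic, the *_br ends get enslaved to the bridge.
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  # Target-facing ends move into the namespace.
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # Addressing as in the trace: initiator side 10.0.0.1/2, target side 10.0.0.3/4.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # Bring everything up, including loopback inside the namespace.
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # The bridge ties the host-side veth ends together so 10.0.0.1/2 can reach 10.0.0.3/4.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done
  # Open NVMe/TCP (port 4420) on the initiator interfaces and allow bridge-internal
  # forwarding; the test additionally tags its rules with an 'SPDK_NVMF:...' comment
  # so they can be removed during cleanup.
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # Same smoke test as the trace's ping checks.
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1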
01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 01:10:04.119 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:10:04.119 [2024-12-09 06:08:58.661782] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:10:04.119 [2024-12-09 06:08:58.663231] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:10:04.119 [2024-12-09 06:08:58.663310] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:10:04.377 [2024-12-09 06:08:58.814626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:10:04.377 [2024-12-09 06:08:58.846607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:10:04.377 [2024-12-09 06:08:58.846862] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:10:04.377 [2024-12-09 06:08:58.847055] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:10:04.377 [2024-12-09 06:08:58.847263] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:10:04.377 [2024-12-09 06:08:58.847423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:10:04.377 [2024-12-09 06:08:58.848204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:10:04.377 [2024-12-09 06:08:58.848253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:10:04.377 [2024-12-09 06:08:58.848367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:10:04.377 [2024-12-09 06:08:58.848492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:10:04.377 [2024-12-09 06:08:58.896326] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:10:04.377 [2024-12-09 06:08:58.896459] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 01:10:04.377 [2024-12-09 06:08:58.896769] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 01:10:04.377 [2024-12-09 06:08:58.897165] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 01:10:04.377 [2024-12-09 06:08:58.897985] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
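The notices above show the target process coming up in interrupt mode with four reactors (cores 1 through 4, mask 0x1E) and every nvmf poll-group thread switched to interrupt mode. Reduced to a stand-alone sketch, the launch-and-wait step that nvmfappstart/waitforlisten perform here looks roughly like the following; only the nvmf_tgt command line is taken from the trace, while the rpc.py path and the retry loop are illustrative:

  #!/usr/bin/env bash
  SPDK_DIR=/home/vagrant/spdk_repo/spdk       # repo path as seen in the trace
  RPC_SOCK=/var/tmp/spdk.sock
  # -m 0x1E pins reactors to cores 1-4, -e 0xFFFF enables all tracepoint groups,
  # and --interrupt-mode is the mode this particular job exercises.
  ip netns exec nvmf_tgt_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
  nvmfpid=$!
  # Poll the RPC socket until the application answers (the waitforlisten step).
  for ((i = 0; i < 200; i++)); do
      if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" spdk_get_version &> /dev/null; then
          break
      fi
      sleep 0.1
  done
  echo "nvmf_tgt is up (pid $nvmfpid)"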
01:10:04.377 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:10:04.377 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 01:10:04.377 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:10:04.377 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 01:10:04.377 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:10:04.636 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:10:04.636 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:10:04.636 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:04.636 06:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:10:04.636 [2024-12-09 06:08:58.977110] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:10:04.636 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:04.636 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 01:10:04.636 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 01:10:04.636 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:10:04.636 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 01:10:04.636 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 01:10:04.636 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 01:10:04.636 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:04.636 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:10:04.636 Malloc0 01:10:04.636 [2024-12-09 06:08:59.053370] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:10:04.636 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:04.636 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 01:10:04.636 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 01:10:04.636 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:10:04.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
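The transport, the Malloc0 namespace and the 10.0.0.3:4420 listener above are created by piping the generated rpcs.txt (the cat at @23/@30) into rpc_cmd, so the individual RPCs are not echoed to the log. An equivalent explicit sequence, matching the transport options and listener seen in the trace, would look roughly like this; the Malloc0 size, block size and serial number are illustrative placeholders:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Transport options exactly as traced: TCP with the -o and -u 8192 tweaks.
  $rpc nvmf_create_transport -t tcp -o -u 8192
  # Backing bdev for the namespace (size and block size chosen only for illustration).
  $rpc bdev_malloc_create -b Malloc0 64 512
  # Subsystem without --allow-any-host, since the test manages allowed hosts explicitly.
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0000000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  # Only this host NQN may connect; removing and re-adding it later is exactly what
  # the host_management test exercises.
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0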
01:10:04.636 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=100579 01:10:04.636 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 100579 /var/tmp/bdevperf.sock 01:10:04.636 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 100579 ']' 01:10:04.636 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:10:04.636 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 01:10:04.636 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:10:04.636 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 01:10:04.636 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:10:04.636 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 01:10:04.636 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 01:10:04.636 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 01:10:04.636 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 01:10:04.636 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:10:04.636 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:10:04.636 { 01:10:04.636 "params": { 01:10:04.636 "name": "Nvme$subsystem", 01:10:04.636 "trtype": "$TEST_TRANSPORT", 01:10:04.636 "traddr": "$NVMF_FIRST_TARGET_IP", 01:10:04.636 "adrfam": "ipv4", 01:10:04.636 "trsvcid": "$NVMF_PORT", 01:10:04.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:10:04.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:10:04.636 "hdgst": ${hdgst:-false}, 01:10:04.636 "ddgst": ${ddgst:-false} 01:10:04.636 }, 01:10:04.636 "method": "bdev_nvme_attach_controller" 01:10:04.636 } 01:10:04.636 EOF 01:10:04.636 )") 01:10:04.636 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 01:10:04.636 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
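gen_nvmf_target_json above builds, through the heredoc and jq, the JSON that bdevperf reads from /dev/fd/63; the resolved parameters are printed by the printf a few lines below. Written out by hand, the invocation is approximately the following. The outer "subsystems"/"bdev" wrapper is the usual SPDK JSON-config shape and is an assumption here, since only the inner bdev_nvme_attach_controller object is printed verbatim in this log:

  # Run the first bdevperf pass against the listener on 10.0.0.3:4420.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 \
      --json <(cat <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.3",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  )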
01:10:04.636 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 01:10:04.636 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:10:04.636 "params": { 01:10:04.636 "name": "Nvme0", 01:10:04.636 "trtype": "tcp", 01:10:04.636 "traddr": "10.0.0.3", 01:10:04.636 "adrfam": "ipv4", 01:10:04.636 "trsvcid": "4420", 01:10:04.636 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:10:04.636 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:10:04.636 "hdgst": false, 01:10:04.636 "ddgst": false 01:10:04.636 }, 01:10:04.636 "method": "bdev_nvme_attach_controller" 01:10:04.636 }' 01:10:04.636 [2024-12-09 06:08:59.151756] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:10:04.636 [2024-12-09 06:08:59.151835] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100579 ] 01:10:04.895 [2024-12-09 06:08:59.295917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:10:04.895 [2024-12-09 06:08:59.329755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:10:04.895 Running I/O for 10 seconds... 01:10:05.153 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:10:05.153 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 01:10:05.153 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 01:10:05.153 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:05.153 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:10:05.153 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:05.153 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:10:05.153 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 01:10:05.153 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 01:10:05.153 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 01:10:05.153 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 01:10:05.153 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 01:10:05.153 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 01:10:05.153 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 01:10:05.153 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 01:10:05.153 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 01:10:05.153 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:05.153 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:10:05.153 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:05.153 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 01:10:05.153 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 01:10:05.153 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 01:10:05.416 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 01:10:05.416 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 01:10:05.416 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 01:10:05.416 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 01:10:05.416 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:05.416 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:10:05.416 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:05.416 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=552 01:10:05.416 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 552 -ge 100 ']' 01:10:05.416 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 01:10:05.416 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 01:10:05.416 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 01:10:05.416 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 01:10:05.416 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:05.416 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:10:05.416 [2024-12-09 06:08:59.897107] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897162] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897191] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897200] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897208] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897216] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897225] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897233] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897241] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897249] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897257] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897281] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897297] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897306] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897314] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897322] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897330] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897338] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897346] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897354] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the 
state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897370] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897378] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897394] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897402] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897411] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897435] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897443] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.897459] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab2a0 is same with the state(6) to be set 01:10:05.416 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:10:05.416 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 01:10:05.416 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:10:05.416 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:10:05.416 [2024-12-09 06:08:59.905059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 01:10:05.416 [2024-12-09 06:08:59.905107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.416 [2024-12-09 06:08:59.905138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 01:10:05.416 [2024-12-09 06:08:59.905148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.416 [2024-12-09 06:08:59.905158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 01:10:05.416 [2024-12-09 06:08:59.905167] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.416 [2024-12-09 06:08:59.905177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 01:10:05.416 [2024-12-09 06:08:59.905186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.416 [2024-12-09 06:08:59.905195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211e130 is same with the state(6) to be set 01:10:05.416 [2024-12-09 06:08:59.905598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.416 [2024-12-09 06:08:59.905629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.416 [2024-12-09 06:08:59.905662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.416 [2024-12-09 06:08:59.905676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.905689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.905699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.905710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.905719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.905730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.905740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.905752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.905761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.905771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.905781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.905792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.905802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.905813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 
lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.905822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.905833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.905842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.905854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.905863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.905874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.905884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.905895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.905905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.905916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.905925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.905937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.905947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.905958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.905967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.905978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.905988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.905999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.906008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.906019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.906029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.906040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.906050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.906061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.906070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.906081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.906090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.906101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.906110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.906121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.906130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.906141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.906151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.906162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.906171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.906182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.906192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.906203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.906213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.906224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.906233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.906244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.906253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.906264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.906273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.906284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.906293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.906305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.906314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.906326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.906335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.906346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.906355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.906367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.906376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.906386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.906395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.906406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.906415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.906427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.906436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.417 [2024-12-09 06:08:59.906447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.417 [2024-12-09 06:08:59.906456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.418 [2024-12-09 06:08:59.906467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.418 [2024-12-09 06:08:59.906477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.418 [2024-12-09 06:08:59.906488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.418 [2024-12-09 06:08:59.906497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.418 [2024-12-09 06:08:59.906508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.418 [2024-12-09 06:08:59.906517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.418 [2024-12-09 06:08:59.906529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.418 [2024-12-09 06:08:59.906538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.418 [2024-12-09 06:08:59.906549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.418 [2024-12-09 06:08:59.906558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.418 [2024-12-09 06:08:59.906569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.418 [2024-12-09 06:08:59.906579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.418 [2024-12-09 06:08:59.906590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.418 [2024-12-09 06:08:59.906599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.418 [2024-12-09 06:08:59.906610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.418 [2024-12-09 06:08:59.906620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.418 [2024-12-09 06:08:59.906630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 01:10:05.418 [2024-12-09 06:08:59.906640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.418 [2024-12-09 06:08:59.906662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.418 [2024-12-09 06:08:59.906672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.418 [2024-12-09 06:08:59.906683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.418 [2024-12-09 06:08:59.906692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.418 [2024-12-09 06:08:59.906703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.418 [2024-12-09 06:08:59.906712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.418 [2024-12-09 06:08:59.906738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.418 [2024-12-09 06:08:59.906750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.418 [2024-12-09 06:08:59.906761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.418 [2024-12-09 06:08:59.906770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.418 [2024-12-09 06:08:59.906781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.418 [2024-12-09 06:08:59.906790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.418 [2024-12-09 06:08:59.906802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.418 [2024-12-09 06:08:59.906811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.418 [2024-12-09 06:08:59.906822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.418 [2024-12-09 06:08:59.906832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.418 [2024-12-09 06:08:59.906843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 01:10:05.418 [2024-12-09 06:08:59.906852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 01:10:05.418 [2024-12-09 06:08:59.906863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
01:10:05.418 [2024-12-09 06:08:59.906872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:10:05.418 [2024-12-09 06:08:59.906883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:10:05.418 [2024-12-09 06:08:59.906892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:10:05.418 [2024-12-09 06:08:59.906903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:10:05.418 [2024-12-09 06:08:59.906913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:10:05.418 [2024-12-09 06:08:59.906924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:10:05.418 [2024-12-09 06:08:59.906933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:10:05.418 [2024-12-09 06:08:59.906946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:10:05.418 [2024-12-09 06:08:59.906955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:10:05.418 [2024-12-09 06:08:59.906966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
01:10:05.418 [2024-12-09 06:08:59.906976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
01:10:05.418 [2024-12-09 06:08:59.907007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1
01:10:05.418 [2024-12-09 06:08:59.908175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
01:10:05.418 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:10:05.418 06:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
01:10:05.418 task offset: 81920 on job bdev=Nvme0n1 fails
01:10:05.418 
01:10:05.418 Latency(us)
01:10:05.418 [2024-12-09T06:09:00.004Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:10:05.418 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
01:10:05.418 Job: Nvme0n1 ended in about 0.44 seconds with error
01:10:05.418 Verification LBA range: start 0x0 length 0x400
01:10:05.418 Nvme0n1 : 0.44 1457.70 91.11 145.77 0.00 38538.13 2383.13 37891.72
01:10:05.418 [2024-12-09T06:09:00.004Z] ===================================================================================================================
01:10:05.418 [2024-12-09T06:09:00.004Z] Total : 1457.70 91.11 145.77 0.00 38538.13 2383.13 37891.72
01:10:05.418 [2024-12-09 06:08:59.910171] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
01:10:05.418 [2024-12-09 06:08:59.910204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211e130
(9): Bad file descriptor 01:10:05.418 [2024-12-09 06:08:59.913244] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 01:10:06.368 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 100579 01:10:06.368 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (100579) - No such process 01:10:06.369 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 01:10:06.369 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 01:10:06.369 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 01:10:06.369 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 01:10:06.369 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 01:10:06.369 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 01:10:06.369 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:10:06.369 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:10:06.369 { 01:10:06.369 "params": { 01:10:06.369 "name": "Nvme$subsystem", 01:10:06.369 "trtype": "$TEST_TRANSPORT", 01:10:06.369 "traddr": "$NVMF_FIRST_TARGET_IP", 01:10:06.369 "adrfam": "ipv4", 01:10:06.369 "trsvcid": "$NVMF_PORT", 01:10:06.369 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:10:06.369 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:10:06.369 "hdgst": ${hdgst:-false}, 01:10:06.369 "ddgst": ${ddgst:-false} 01:10:06.369 }, 01:10:06.369 "method": "bdev_nvme_attach_controller" 01:10:06.369 } 01:10:06.369 EOF 01:10:06.369 )") 01:10:06.369 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 01:10:06.369 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 01:10:06.369 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 01:10:06.369 06:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:10:06.369 "params": { 01:10:06.369 "name": "Nvme0", 01:10:06.369 "trtype": "tcp", 01:10:06.369 "traddr": "10.0.0.3", 01:10:06.369 "adrfam": "ipv4", 01:10:06.369 "trsvcid": "4420", 01:10:06.369 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:10:06.369 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:10:06.369 "hdgst": false, 01:10:06.369 "ddgst": false 01:10:06.369 }, 01:10:06.369 "method": "bdev_nvme_attach_controller" 01:10:06.369 }' 01:10:06.628 [2024-12-09 06:09:00.975291] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
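The heredoc fragment printed above is what gen_nvmf_target_json assembles and feeds to bdevperf over /dev/fd/62. For reference, a minimal stand-alone sketch of the equivalent invocation is shown below; it assumes the standard SPDK "subsystems"/"config" JSON wrapping around the bdev_nvme_attach_controller call, and the temp-file path is a placeholder (the script itself never writes a file, it passes a process-substitution fd). The controller parameters and bdevperf flags are copied from this run.
# Sketch only: same controller parameters as printed above, written to a file
# instead of /dev/fd/62 (file path and outer JSON layout are assumptions).
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1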
01:10:06.628 [2024-12-09 06:09:00.975420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100614 ] 01:10:06.628 [2024-12-09 06:09:01.122814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:10:06.628 [2024-12-09 06:09:01.158206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:10:06.886 Running I/O for 1 seconds... 01:10:07.821 1536.00 IOPS, 96.00 MiB/s 01:10:07.821 Latency(us) 01:10:07.821 [2024-12-09T06:09:02.407Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:10:07.821 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 01:10:07.821 Verification LBA range: start 0x0 length 0x400 01:10:07.821 Nvme0n1 : 1.02 1566.09 97.88 0.00 0.00 40054.07 4527.94 36461.85 01:10:07.821 [2024-12-09T06:09:02.407Z] =================================================================================================================== 01:10:07.821 [2024-12-09T06:09:02.407Z] Total : 1566.09 97.88 0.00 0.00 40054.07 4527.94 36461.85 01:10:08.080 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 01:10:08.080 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 01:10:08.080 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 01:10:08.080 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 01:10:08.080 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 01:10:08.080 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 01:10:08.080 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 01:10:08.080 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:10:08.080 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 01:10:08.080 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 01:10:08.080 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:10:08.080 rmmod nvme_tcp 01:10:08.080 rmmod nvme_fabrics 01:10:08.080 rmmod nvme_keyring 01:10:08.080 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:10:08.080 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 01:10:08.081 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 01:10:08.081 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 100515 ']' 01:10:08.081 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 100515 01:10:08.081 06:09:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 100515 ']' 01:10:08.081 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 100515 01:10:08.081 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 01:10:08.081 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:10:08.081 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100515 01:10:08.081 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:10:08.081 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:10:08.081 killing process with pid 100515 01:10:08.081 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100515' 01:10:08.081 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 100515 01:10:08.081 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 100515 01:10:08.340 [2024-12-09 06:09:02.733670] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 01:10:08.340 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:10:08.340 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:10:08.340 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:10:08.340 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 01:10:08.340 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 01:10:08.340 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:10:08.340 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 01:10:08.340 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:10:08.340 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:10:08.340 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:10:08.340 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:10:08.340 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:10:08.340 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:10:08.340 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:10:08.340 06:09:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:10:08.340 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:10:08.340 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:10:08.340 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:10:08.340 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:10:08.599 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:10:08.599 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:10:08.599 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:10:08.599 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 01:10:08.599 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:10:08.599 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:10:08.599 06:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:10:08.599 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 01:10:08.599 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 01:10:08.599 ************************************ 01:10:08.599 END TEST nvmf_host_management 01:10:08.599 ************************************ 01:10:08.599 01:10:08.599 real 0m5.081s 01:10:08.599 user 0m15.889s 01:10:08.599 sys 0m2.424s 01:10:08.599 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 01:10:08.599 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 01:10:08.599 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 01:10:08.599 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:10:08.599 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:10:08.599 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:10:08.599 ************************************ 01:10:08.599 START TEST nvmf_lvol 01:10:08.599 ************************************ 01:10:08.599 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 01:10:08.599 * Looking for test storage... 
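Both tests in this block finish with the same nvmftestfini/nvmf_veth_fini teardown traced above: the SPDK_NVMF-tagged iptables rules are filtered out, the veth legs are detached from the bridge and brought down, and the bridge, host-side interfaces and namespaced target interfaces are deleted. Condensed into plain commands it is roughly the sketch below; the final namespace removal is an assumption about what the remove_spdk_ns helper does, and the "|| true" guards only reflect that some devices may already be gone.
# Best-effort sketch of the virtual-network teardown seen above.
iptables-save | grep -v SPDK_NVMF | iptables-restore             # strip SPDK-tagged rules
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster || true
    ip link set "$dev" down     || true
done
ip link delete nvmf_br type bridge || true
ip link delete nvmf_init_if        || true
ip link delete nvmf_init_if2       || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
ip netns delete nvmf_tgt_ns_spdk   || true                       # assumed remove_spdk_ns step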
01:10:08.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:10:08.599 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:10:08.599 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 01:10:08.599 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:10:08.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:08.860 --rc genhtml_branch_coverage=1 01:10:08.860 --rc genhtml_function_coverage=1 01:10:08.860 --rc genhtml_legend=1 01:10:08.860 --rc geninfo_all_blocks=1 01:10:08.860 --rc geninfo_unexecuted_blocks=1 01:10:08.860 01:10:08.860 ' 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:10:08.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:08.860 --rc genhtml_branch_coverage=1 01:10:08.860 --rc genhtml_function_coverage=1 01:10:08.860 --rc genhtml_legend=1 01:10:08.860 --rc geninfo_all_blocks=1 01:10:08.860 --rc geninfo_unexecuted_blocks=1 01:10:08.860 01:10:08.860 ' 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:10:08.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:08.860 --rc genhtml_branch_coverage=1 01:10:08.860 --rc genhtml_function_coverage=1 01:10:08.860 --rc genhtml_legend=1 01:10:08.860 --rc geninfo_all_blocks=1 01:10:08.860 --rc geninfo_unexecuted_blocks=1 01:10:08.860 01:10:08.860 ' 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:10:08.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:08.860 --rc genhtml_branch_coverage=1 01:10:08.860 --rc genhtml_function_coverage=1 01:10:08.860 --rc genhtml_legend=1 01:10:08.860 --rc geninfo_all_blocks=1 01:10:08.860 --rc geninfo_unexecuted_blocks=1 01:10:08.860 01:10:08.860 ' 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
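The scripts/common.sh trace above is the lcov version gate: "lt 1.15 2" splits both version strings on ".", "-" and ":" and compares them field by field as integers, so the installed lcov 1.15 is treated as older than 2 and the matching LCOV_OPTS are exported. A condensed sketch of that comparison (the logic the trace walks through, not the literal helper):
# Sketch: numeric, field-wise version comparison as traced above.
version_lt() {                      # returns 0 if $1 < $2
    local IFS='.-:' v1 v2 i
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        ((10#$a < 10#$b)) && return 0
        ((10#$a > 10#$b)) && return 1
    done
    return 1                        # equal -> not less-than
}
version_lt 1.15 2 && echo "lcov 1.15 is older than 2"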
target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:10:08.860 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:10:08.861 06:09:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:10:08.861 Cannot find device "nvmf_init_br" 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # true 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:10:08.861 Cannot find device "nvmf_init_br2" 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # true 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:10:08.861 Cannot find device "nvmf_tgt_br" 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # true 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:10:08.861 Cannot find device "nvmf_tgt_br2" 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # true 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:10:08.861 Cannot find device "nvmf_init_br" 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # true 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:10:08.861 Cannot find device "nvmf_init_br2" 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # true 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:10:08.861 Cannot find 
device "nvmf_tgt_br" 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # true 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:10:08.861 Cannot find device "nvmf_tgt_br2" 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # true 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:10:08.861 Cannot find device "nvmf_br" 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # true 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:10:08.861 Cannot find device "nvmf_init_if" 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # true 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:10:08.861 Cannot find device "nvmf_init_if2" 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # true 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:10:08.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # true 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:10:08.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # true 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:10:08.861 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:10:09.121 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:10:09.121 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:10:09.121 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:10:09.121 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:10:09.121 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:10:09.121 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:10:09.121 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:10:09.121 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:10:09.121 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:10:09.121 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:10:09.121 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:10:09.121 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:10:09.122 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:10:09.122 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 01:10:09.122 01:10:09.122 --- 10.0.0.3 ping statistics --- 01:10:09.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:09.122 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:10:09.122 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:10:09.122 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 01:10:09.122 01:10:09.122 --- 10.0.0.4 ping statistics --- 01:10:09.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:09.122 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:10:09.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:10:09.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 01:10:09.122 01:10:09.122 --- 10.0.0.1 ping statistics --- 01:10:09.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:09.122 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:10:09.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:10:09.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 01:10:09.122 01:10:09.122 --- 10.0.0.2 ping statistics --- 01:10:09.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:09.122 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=100874 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 100874 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 100874 ']' 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 01:10:09.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 01:10:09.122 06:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:10:09.381 [2024-12-09 06:09:03.759423] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:10:09.381 [2024-12-09 06:09:03.760733] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:10:09.381 [2024-12-09 06:09:03.760814] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:10:09.381 [2024-12-09 06:09:03.915796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:10:09.640 [2024-12-09 06:09:03.975244] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:10:09.640 [2024-12-09 06:09:03.975327] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:10:09.640 [2024-12-09 06:09:03.975348] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:10:09.640 [2024-12-09 06:09:03.975364] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:10:09.640 [2024-12-09 06:09:03.975379] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:10:09.640 [2024-12-09 06:09:03.976528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:10:09.640 [2024-12-09 06:09:03.976697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:10:09.640 [2024-12-09 06:09:03.976708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:10:09.640 [2024-12-09 06:09:04.038301] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:10:09.640 [2024-12-09 06:09:04.038483] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:10:09.640 [2024-12-09 06:09:04.038826] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 01:10:09.640 [2024-12-09 06:09:04.039548] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
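The target itself is started inside the namespace with all trace groups enabled, a three-core mask and --interrupt-mode, and the script then blocks on waitforlisten until the RPC socket answers; the reactor and spdk_thread notices above confirm the three cores came up in interrupt mode. A minimal stand-alone sketch of the same launch is shown below; the polling loop is an assumption, since waitforlisten's internals are not part of this log.
# Sketch: launch nvmf_tgt in the test namespace and wait for its RPC socket.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
nvmfpid=$!
# Assumed wait loop: poll the default RPC socket until the app responds.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version \
        >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done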
01:10:09.640 06:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:10:09.640 06:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 01:10:09.640 06:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:10:09.640 06:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 01:10:09.640 06:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:10:09.640 06:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:10:09.640 06:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:10:09.904 [2024-12-09 06:09:04.349895] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:10:09.904 06:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:10:10.163 06:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 01:10:10.163 06:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:10:10.729 06:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 01:10:10.729 06:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 01:10:10.729 06:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 01:10:11.295 06:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a7bd2655-889b-4ed7-81f5-de1904587262 01:10:11.295 06:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a7bd2655-889b-4ed7-81f5-de1904587262 lvol 20 01:10:11.553 06:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=27695e6f-ab31-4d73-8aba-e0db3b9a938a 01:10:11.553 06:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:10:11.812 06:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 27695e6f-ab31-4d73-8aba-e0db3b9a938a 01:10:12.071 06:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:10:12.331 [2024-12-09 06:09:06.685857] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:10:12.331 06:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:10:12.589 06:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=101008 01:10:12.589 06:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 01:10:12.589 06:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 01:10:13.524 06:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 27695e6f-ab31-4d73-8aba-e0db3b9a938a MY_SNAPSHOT 01:10:13.801 06:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=3d5fe90f-8c9e-4e71-b491-5c0d39d504df 01:10:13.801 06:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 27695e6f-ab31-4d73-8aba-e0db3b9a938a 30 01:10:14.060 06:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 3d5fe90f-8c9e-4e71-b491-5c0d39d504df MY_CLONE 01:10:14.624 06:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=64277eb2-7042-4726-b66a-af06e3b29e84 01:10:14.625 06:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 64277eb2-7042-4726-b66a-af06e3b29e84 01:10:15.190 06:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 101008 01:10:23.301 Initializing NVMe Controllers 01:10:23.301 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 01:10:23.301 Controller IO queue size 128, less than required. 01:10:23.301 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:10:23.301 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 01:10:23.301 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 01:10:23.301 Initialization complete. Launching workers. 
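Everything the lvol test exercises can be read straight off the rpc.py calls traced above: a TCP transport, two malloc bdevs (64 MiB, 512-byte blocks) striped into raid0, a logical volume store and a volume on top of it, a subsystem exposing that volume on 10.0.0.3:4420, and then, while spdk_nvme_perf issues random writes at queue depth 128, a snapshot, a resize, a clone and an inflate of the volume. Gathered into one sequence below as a sketch; $lvs, $lvol, $snap and $clone stand in for the concrete UUID/alias strings the real run captured from each call's output.
# Sketch of the provisioning + data-path sequence traced above
# (rpc = scripts/rpc.py, run against the target started earlier).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                      # -> Malloc0
$rpc bdev_malloc_create 64 512                      # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

# 10 s of random writes from two cores while the volume is snapshotted,
# grown, cloned and inflated underneath the running workload.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
perf_pid=$!
sleep 1
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"
wait "$perf_pid"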
01:10:23.301 ======================================================== 01:10:23.301 Latency(us) 01:10:23.301 Device Information : IOPS MiB/s Average min max 01:10:23.301 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10415.50 40.69 12296.78 2192.89 49853.66 01:10:23.301 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10782.40 42.12 11876.75 2334.00 50356.55 01:10:23.301 ======================================================== 01:10:23.301 Total : 21197.90 82.80 12083.13 2192.89 50356.55 01:10:23.301 01:10:23.301 06:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:10:23.301 06:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 27695e6f-ab31-4d73-8aba-e0db3b9a938a 01:10:23.301 06:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a7bd2655-889b-4ed7-81f5-de1904587262 01:10:23.561 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 01:10:23.561 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 01:10:23.561 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 01:10:23.561 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 01:10:23.561 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 01:10:23.561 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:10:23.561 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 01:10:23.561 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 01:10:23.561 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:10:23.561 rmmod nvme_tcp 01:10:23.561 rmmod nvme_fabrics 01:10:23.561 rmmod nvme_keyring 01:10:23.561 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:10:23.561 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 01:10:23.561 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 01:10:23.561 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 100874 ']' 01:10:23.561 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 100874 01:10:23.561 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 100874 ']' 01:10:23.561 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 100874 01:10:23.561 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 01:10:23.561 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:10:23.561 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100874 01:10:23.821 killing 
process with pid 100874 01:10:23.821 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:10:23.821 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:10:23.821 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100874' 01:10:23.821 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 100874 01:10:23.821 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 100874 01:10:23.821 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:10:23.821 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:10:23.821 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:10:23.821 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 01:10:23.821 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 01:10:23.821 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:10:23.821 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 01:10:23.821 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:10:23.821 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:10:23.821 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:10:23.821 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:10:23.821 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:10:23.821 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:10:23.821 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:10:23.821 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:10:23.821 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:10:24.080 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:10:24.080 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:10:24.080 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:10:24.080 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:10:24.080 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:10:24.080 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:10:24.080 
06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 01:10:24.080 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:10:24.080 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:10:24.080 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:10:24.080 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 01:10:24.080 ************************************ 01:10:24.080 END TEST nvmf_lvol 01:10:24.080 ************************************ 01:10:24.080 01:10:24.080 real 0m15.490s 01:10:24.080 user 0m55.724s 01:10:24.080 sys 0m5.671s 01:10:24.080 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 01:10:24.080 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 01:10:24.080 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 01:10:24.080 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:10:24.080 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:10:24.080 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:10:24.080 ************************************ 01:10:24.080 START TEST nvmf_lvs_grow 01:10:24.081 ************************************ 01:10:24.081 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 01:10:24.341 * Looking for test storage... 
01:10:24.341 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:10:24.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:24.341 --rc genhtml_branch_coverage=1 01:10:24.341 --rc genhtml_function_coverage=1 01:10:24.341 --rc genhtml_legend=1 01:10:24.341 --rc geninfo_all_blocks=1 01:10:24.341 --rc geninfo_unexecuted_blocks=1 01:10:24.341 01:10:24.341 ' 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:10:24.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:24.341 --rc genhtml_branch_coverage=1 01:10:24.341 --rc genhtml_function_coverage=1 01:10:24.341 --rc genhtml_legend=1 01:10:24.341 --rc geninfo_all_blocks=1 01:10:24.341 --rc geninfo_unexecuted_blocks=1 01:10:24.341 01:10:24.341 ' 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:10:24.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:24.341 --rc genhtml_branch_coverage=1 01:10:24.341 --rc genhtml_function_coverage=1 01:10:24.341 --rc genhtml_legend=1 01:10:24.341 --rc geninfo_all_blocks=1 01:10:24.341 --rc geninfo_unexecuted_blocks=1 01:10:24.341 01:10:24.341 ' 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:10:24.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:10:24.341 --rc genhtml_branch_coverage=1 01:10:24.341 --rc genhtml_function_coverage=1 01:10:24.341 --rc genhtml_legend=1 01:10:24.341 --rc geninfo_all_blocks=1 01:10:24.341 --rc geninfo_unexecuted_blocks=1 01:10:24.341 01:10:24.341 ' 01:10:24.341 06:09:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:10:24.341 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
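The nvmf/common.sh trace above reduces to generating a host NQN and assembling the target's argument array. A minimal sketch of that setup, reusing the variable names from the trace (the initial value of NVMF_APP and the SHM-id default are assumptions here, not copied from the script):

    NVMF_PORT=4420
    NVME_HOSTNQN=$(nvme gen-hostnqn)                          # e.g. nqn.2014-08.org.nvmexpress:uuid:...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}                       # uuid portion, matching the trace above
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVMF_APP=("/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt")   # assumed binary path (see the launch later in the trace)
    NVMF_APP+=(-i "${NVMF_APP_SHM_ID:-0}" -e 0xFFFF)          # shared-memory id and trace mask
    NVMF_APP+=("${NO_HUGE[@]}")                               # empty unless a hugepage-less run is requested

The --interrupt-mode flag is appended to the same array a few entries further down in the trace, which is what puts this run into interrupt mode.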
01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:10:24.342 06:09:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:10:24.342 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:10:24.343 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:10:24.343 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:10:24.343 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:10:24.343 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:10:24.343 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:10:24.343 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:10:24.343 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:10:24.343 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:10:24.343 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:10:24.343 Cannot find device "nvmf_init_br" 01:10:24.343 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 01:10:24.343 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:10:24.343 Cannot find device "nvmf_init_br2" 01:10:24.343 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 01:10:24.343 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:10:24.343 Cannot find device "nvmf_tgt_br" 01:10:24.343 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 01:10:24.343 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:10:24.343 Cannot find device "nvmf_tgt_br2" 01:10:24.343 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 01:10:24.343 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:10:24.343 Cannot find device "nvmf_init_br" 01:10:24.343 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 01:10:24.343 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:10:24.343 Cannot find device "nvmf_init_br2" 01:10:24.343 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 01:10:24.343 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:10:24.602 Cannot find device "nvmf_tgt_br" 01:10:24.602 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@168 -- # true 01:10:24.602 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:10:24.602 Cannot find device "nvmf_tgt_br2" 01:10:24.602 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 01:10:24.602 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:10:24.602 Cannot find device "nvmf_br" 01:10:24.602 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 01:10:24.602 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:10:24.602 Cannot find device "nvmf_init_if" 01:10:24.602 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 01:10:24.602 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:10:24.602 Cannot find device "nvmf_init_if2" 01:10:24.602 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 01:10:24.602 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:10:24.602 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:10:24.602 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 01:10:24.602 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:10:24.602 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:10:24.602 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 01:10:24.602 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:10:24.602 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:10:24.602 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:10:24.602 06:09:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:10:24.602 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:10:24.602 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:10:24.602 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:10:24.602 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:10:24.602 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:10:24.602 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:10:24.603 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:10:24.603 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:10:24.603 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:10:24.603 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:10:24.603 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:10:24.603 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:10:24.603 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:10:24.603 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:10:24.603 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:10:24.603 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:10:24.603 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:10:24.603 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:10:24.603 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:10:24.603 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:10:24.603 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:10:24.603 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:10:24.603 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:10:24.603 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:10:24.603 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:10:24.603 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:10:24.603 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:10:24.603 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:10:24.603 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping 
-c 1 10.0.0.3 01:10:24.603 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:10:24.603 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 01:10:24.603 01:10:24.603 --- 10.0.0.3 ping statistics --- 01:10:24.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:24.603 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 01:10:24.603 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:10:24.603 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:10:24.603 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 01:10:24.603 01:10:24.603 --- 10.0.0.4 ping statistics --- 01:10:24.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:24.603 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 01:10:24.863 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:10:24.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:10:24.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 01:10:24.863 01:10:24.863 --- 10.0.0.1 ping statistics --- 01:10:24.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:24.863 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 01:10:24.863 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:10:24.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:10:24.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 01:10:24.863 01:10:24.863 --- 10.0.0.2 ping statistics --- 01:10:24.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:10:24.863 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 01:10:24.863 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:10:24.863 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 01:10:24.863 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:10:24.863 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:10:24.863 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:10:24.863 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:10:24.863 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:10:24.863 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:10:24.863 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:10:24.863 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 01:10:24.863 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:10:24.863 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 01:10:24.863 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:10:24.863 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=101419 01:10:24.863 06:09:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 01:10:24.863 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 101419 01:10:24.863 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 101419 ']' 01:10:24.863 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:10:24.863 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 01:10:24.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:10:24.863 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:10:24.863 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 01:10:24.863 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:10:24.863 [2024-12-09 06:09:19.290422] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:10:24.863 [2024-12-09 06:09:19.291807] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:10:24.863 [2024-12-09 06:09:19.291902] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:10:24.863 [2024-12-09 06:09:19.444745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:10:25.122 [2024-12-09 06:09:19.482092] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:10:25.123 [2024-12-09 06:09:19.482166] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:10:25.123 [2024-12-09 06:09:19.482188] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:10:25.123 [2024-12-09 06:09:19.482198] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:10:25.123 [2024-12-09 06:09:19.482206] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:10:25.123 [2024-12-09 06:09:19.482566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:10:25.123 [2024-12-09 06:09:19.538492] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:10:25.123 [2024-12-09 06:09:19.538891] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
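The nvmf_veth_init / nvmfappstart sequence traced above is dense; condensed into a sketch (same interface names and addresses as the log, with the second initiator/target pair, the extra iptables rules, and error handling trimmed), the topology and the target launch look roughly like this:

    # The target lives in its own network namespace; veth pairs cross the boundary
    # and are tied together with a bridge. 10.0.0.1 stays on the host (initiator
    # side), 10.0.0.3 goes to the target namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3           # initiator -> target sanity check, as in the trace
    # Launch the target inside the namespace: core 0 only, interrupt mode.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &

The pings in the trace (10.0.0.3/10.0.0.4 from the host, 10.0.0.1/10.0.0.2 from inside the namespace) verify exactly this bridged path before the target is started.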
01:10:25.123 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:10:25.123 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 01:10:25.123 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:10:25.123 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 01:10:25.123 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:10:25.123 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:10:25.123 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:10:25.382 [2024-12-09 06:09:19.903405] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:10:25.383 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 01:10:25.383 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:10:25.383 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 01:10:25.383 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:10:25.383 ************************************ 01:10:25.383 START TEST lvs_grow_clean 01:10:25.383 ************************************ 01:10:25.383 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 01:10:25.383 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 01:10:25.383 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 01:10:25.383 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 01:10:25.383 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 01:10:25.383 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 01:10:25.383 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 01:10:25.383 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:10:25.383 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:10:25.383 06:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:10:25.950 06:09:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 01:10:25.950 06:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 01:10:25.950 06:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=eb527118-8d00-4eaf-9529-3f6961d9ff79 01:10:25.950 06:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb527118-8d00-4eaf-9529-3f6961d9ff79 01:10:25.950 06:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 01:10:26.518 06:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 01:10:26.518 06:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 01:10:26.518 06:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u eb527118-8d00-4eaf-9529-3f6961d9ff79 lvol 150 01:10:26.778 06:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3f42b8e1-9e87-4ed4-9445-8a0e0a48b952 01:10:26.778 06:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:10:26.778 06:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 01:10:26.778 [2024-12-09 06:09:21.359232] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 01:10:26.778 [2024-12-09 06:09:21.359415] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 01:10:27.036 true 01:10:27.036 06:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb527118-8d00-4eaf-9529-3f6961d9ff79 01:10:27.036 06:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 01:10:27.296 06:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 01:10:27.296 06:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:10:27.555 06:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3f42b8e1-9e87-4ed4-9445-8a0e0a48b952 01:10:27.814 06:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:10:28.073 [2024-12-09 06:09:22.523727] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:10:28.073 06:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:10:28.333 06:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=101572 01:10:28.333 06:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 01:10:28.333 06:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:10:28.333 06:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 101572 /var/tmp/bdevperf.sock 01:10:28.333 06:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 101572 ']' 01:10:28.333 06:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:10:28.333 06:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 01:10:28.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:10:28.333 06:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:10:28.333 06:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 01:10:28.333 06:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 01:10:28.592 [2024-12-09 06:09:22.927927] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
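Pulled out of the trace above, the stack under test is an lvol on an lvstore on a file-backed AIO bdev, exported over NVMe/TCP and exercised by bdevperf across the veth link. A condensed sketch of the same rpc.py sequence (the UUID capture mirrors how the script stores the lvstore and lvol ids in variables):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    $rpc nvmf_create_transport -t tcp -o -u 8192                 # transport options exactly as traced above
    truncate -s 200M "$aio"                                      # 200 MiB backing file
    $rpc bdev_aio_create "$aio" aio_bdev 4096                    # 4 KiB block size
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)     # 4 MiB clusters -> 49 data clusters
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)             # 150 MiB volume
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    # Load generator: a separate bdevperf process, driven over its own RPC socket.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

bdevperf then attaches to the subsystem with bdev_nvme_attach_controller over 10.0.0.3:4420, which is the next step in the trace and produces the Nvme0n1 bdev listed below.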
01:10:28.593 [2024-12-09 06:09:22.928022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101572 ] 01:10:28.593 [2024-12-09 06:09:23.078805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:10:28.593 [2024-12-09 06:09:23.120546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:10:29.530 06:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:10:29.530 06:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 01:10:29.530 06:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 01:10:29.789 Nvme0n1 01:10:29.789 06:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 01:10:30.047 [ 01:10:30.047 { 01:10:30.047 "aliases": [ 01:10:30.047 "3f42b8e1-9e87-4ed4-9445-8a0e0a48b952" 01:10:30.047 ], 01:10:30.047 "assigned_rate_limits": { 01:10:30.047 "r_mbytes_per_sec": 0, 01:10:30.047 "rw_ios_per_sec": 0, 01:10:30.048 "rw_mbytes_per_sec": 0, 01:10:30.048 "w_mbytes_per_sec": 0 01:10:30.048 }, 01:10:30.048 "block_size": 4096, 01:10:30.048 "claimed": false, 01:10:30.048 "driver_specific": { 01:10:30.048 "mp_policy": "active_passive", 01:10:30.048 "nvme": [ 01:10:30.048 { 01:10:30.048 "ctrlr_data": { 01:10:30.048 "ana_reporting": false, 01:10:30.048 "cntlid": 1, 01:10:30.048 "firmware_revision": "25.01", 01:10:30.048 "model_number": "SPDK bdev Controller", 01:10:30.048 "multi_ctrlr": true, 01:10:30.048 "oacs": { 01:10:30.048 "firmware": 0, 01:10:30.048 "format": 0, 01:10:30.048 "ns_manage": 0, 01:10:30.048 "security": 0 01:10:30.048 }, 01:10:30.048 "serial_number": "SPDK0", 01:10:30.048 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:10:30.048 "vendor_id": "0x8086" 01:10:30.048 }, 01:10:30.048 "ns_data": { 01:10:30.048 "can_share": true, 01:10:30.048 "id": 1 01:10:30.048 }, 01:10:30.048 "trid": { 01:10:30.048 "adrfam": "IPv4", 01:10:30.048 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:10:30.048 "traddr": "10.0.0.3", 01:10:30.048 "trsvcid": "4420", 01:10:30.048 "trtype": "TCP" 01:10:30.048 }, 01:10:30.048 "vs": { 01:10:30.048 "nvme_version": "1.3" 01:10:30.048 } 01:10:30.048 } 01:10:30.048 ] 01:10:30.048 }, 01:10:30.048 "memory_domains": [ 01:10:30.048 { 01:10:30.048 "dma_device_id": "system", 01:10:30.048 "dma_device_type": 1 01:10:30.048 } 01:10:30.048 ], 01:10:30.048 "name": "Nvme0n1", 01:10:30.048 "num_blocks": 38912, 01:10:30.048 "numa_id": -1, 01:10:30.048 "product_name": "NVMe disk", 01:10:30.048 "supported_io_types": { 01:10:30.048 "abort": true, 01:10:30.048 "compare": true, 01:10:30.048 "compare_and_write": true, 01:10:30.048 "copy": true, 01:10:30.048 "flush": true, 01:10:30.048 "get_zone_info": false, 01:10:30.048 "nvme_admin": true, 01:10:30.048 "nvme_io": true, 01:10:30.048 "nvme_io_md": false, 01:10:30.048 "nvme_iov_md": false, 01:10:30.048 "read": true, 01:10:30.048 "reset": true, 01:10:30.048 "seek_data": false, 01:10:30.048 
"seek_hole": false, 01:10:30.048 "unmap": true, 01:10:30.048 "write": true, 01:10:30.048 "write_zeroes": true, 01:10:30.048 "zcopy": false, 01:10:30.048 "zone_append": false, 01:10:30.048 "zone_management": false 01:10:30.048 }, 01:10:30.048 "uuid": "3f42b8e1-9e87-4ed4-9445-8a0e0a48b952", 01:10:30.048 "zoned": false 01:10:30.048 } 01:10:30.048 ] 01:10:30.048 06:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=101614 01:10:30.048 06:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:10:30.048 06:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 01:10:30.308 Running I/O for 10 seconds... 01:10:31.308 Latency(us) 01:10:31.308 [2024-12-09T06:09:25.894Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:10:31.308 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:10:31.308 Nvme0n1 : 1.00 6319.00 24.68 0.00 0.00 0.00 0.00 0.00 01:10:31.308 [2024-12-09T06:09:25.894Z] =================================================================================================================== 01:10:31.308 [2024-12-09T06:09:25.894Z] Total : 6319.00 24.68 0.00 0.00 0.00 0.00 0.00 01:10:31.308 01:10:32.242 06:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u eb527118-8d00-4eaf-9529-3f6961d9ff79 01:10:32.242 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:10:32.242 Nvme0n1 : 2.00 6528.50 25.50 0.00 0.00 0.00 0.00 0.00 01:10:32.242 [2024-12-09T06:09:26.828Z] =================================================================================================================== 01:10:32.242 [2024-12-09T06:09:26.828Z] Total : 6528.50 25.50 0.00 0.00 0.00 0.00 0.00 01:10:32.242 01:10:32.501 true 01:10:32.501 06:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb527118-8d00-4eaf-9529-3f6961d9ff79 01:10:32.501 06:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 01:10:32.759 06:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 01:10:32.759 06:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 01:10:32.759 06:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 101614 01:10:33.324 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:10:33.324 Nvme0n1 : 3.00 6707.67 26.20 0.00 0.00 0.00 0.00 0.00 01:10:33.324 [2024-12-09T06:09:27.910Z] =================================================================================================================== 01:10:33.324 [2024-12-09T06:09:27.910Z] Total : 6707.67 26.20 0.00 0.00 0.00 0.00 0.00 01:10:33.324 01:10:34.262 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:10:34.262 Nvme0n1 : 4.00 6702.00 26.18 0.00 0.00 0.00 0.00 0.00 01:10:34.262 
[2024-12-09T06:09:28.848Z] =================================================================================================================== 01:10:34.262 [2024-12-09T06:09:28.848Z] Total : 6702.00 26.18 0.00 0.00 0.00 0.00 0.00 01:10:34.262 01:10:35.197 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:10:35.197 Nvme0n1 : 5.00 6716.20 26.24 0.00 0.00 0.00 0.00 0.00 01:10:35.197 [2024-12-09T06:09:29.783Z] =================================================================================================================== 01:10:35.197 [2024-12-09T06:09:29.783Z] Total : 6716.20 26.24 0.00 0.00 0.00 0.00 0.00 01:10:35.197 01:10:36.140 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:10:36.140 Nvme0n1 : 6.00 6738.17 26.32 0.00 0.00 0.00 0.00 0.00 01:10:36.140 [2024-12-09T06:09:30.726Z] =================================================================================================================== 01:10:36.140 [2024-12-09T06:09:30.726Z] Total : 6738.17 26.32 0.00 0.00 0.00 0.00 0.00 01:10:36.140 01:10:37.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:10:37.513 Nvme0n1 : 7.00 6650.14 25.98 0.00 0.00 0.00 0.00 0.00 01:10:37.513 [2024-12-09T06:09:32.099Z] =================================================================================================================== 01:10:37.513 [2024-12-09T06:09:32.099Z] Total : 6650.14 25.98 0.00 0.00 0.00 0.00 0.00 01:10:37.513 01:10:38.449 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:10:38.449 Nvme0n1 : 8.00 6637.38 25.93 0.00 0.00 0.00 0.00 0.00 01:10:38.449 [2024-12-09T06:09:33.035Z] =================================================================================================================== 01:10:38.449 [2024-12-09T06:09:33.035Z] Total : 6637.38 25.93 0.00 0.00 0.00 0.00 0.00 01:10:38.449 01:10:39.383 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:10:39.383 Nvme0n1 : 9.00 6627.56 25.89 0.00 0.00 0.00 0.00 0.00 01:10:39.383 [2024-12-09T06:09:33.969Z] =================================================================================================================== 01:10:39.383 [2024-12-09T06:09:33.970Z] Total : 6627.56 25.89 0.00 0.00 0.00 0.00 0.00 01:10:39.384 01:10:40.321 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:10:40.321 Nvme0n1 : 10.00 6618.80 25.85 0.00 0.00 0.00 0.00 0.00 01:10:40.321 [2024-12-09T06:09:34.907Z] =================================================================================================================== 01:10:40.321 [2024-12-09T06:09:34.907Z] Total : 6618.80 25.85 0.00 0.00 0.00 0.00 0.00 01:10:40.321 01:10:40.321 01:10:40.321 Latency(us) 01:10:40.321 [2024-12-09T06:09:34.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:10:40.321 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:10:40.321 Nvme0n1 : 10.02 6621.25 25.86 0.00 0.00 19325.74 8757.99 106764.10 01:10:40.321 [2024-12-09T06:09:34.907Z] =================================================================================================================== 01:10:40.321 [2024-12-09T06:09:34.907Z] Total : 6621.25 25.86 0.00 0.00 19325.74 8757.99 106764.10 01:10:40.321 { 01:10:40.321 "results": [ 01:10:40.321 { 01:10:40.321 "job": "Nvme0n1", 01:10:40.321 "core_mask": "0x2", 01:10:40.321 "workload": "randwrite", 01:10:40.321 "status": "finished", 01:10:40.321 "queue_depth": 128, 01:10:40.321 "io_size": 4096, 
01:10:40.321 "runtime": 10.01563, 01:10:40.321 "iops": 6621.250984710897, 01:10:40.321 "mibps": 25.86426165902694, 01:10:40.321 "io_failed": 0, 01:10:40.321 "io_timeout": 0, 01:10:40.321 "avg_latency_us": 19325.741005543703, 01:10:40.321 "min_latency_us": 8757.992727272727, 01:10:40.321 "max_latency_us": 106764.10181818182 01:10:40.321 } 01:10:40.321 ], 01:10:40.321 "core_count": 1 01:10:40.321 } 01:10:40.321 06:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 101572 01:10:40.321 06:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 101572 ']' 01:10:40.321 06:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 101572 01:10:40.321 06:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 01:10:40.321 06:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:10:40.321 06:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101572 01:10:40.321 06:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:10:40.321 06:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:10:40.321 06:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101572' 01:10:40.321 killing process with pid 101572 01:10:40.321 06:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 101572 01:10:40.321 Received shutdown signal, test time was about 10.000000 seconds 01:10:40.321 01:10:40.321 Latency(us) 01:10:40.321 [2024-12-09T06:09:34.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:10:40.321 [2024-12-09T06:09:34.907Z] =================================================================================================================== 01:10:40.321 [2024-12-09T06:09:34.907Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:10:40.321 06:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 101572 01:10:40.580 06:09:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:10:40.839 06:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:10:41.097 06:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb527118-8d00-4eaf-9529-3f6961d9ff79 01:10:41.097 06:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 01:10:41.354 06:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 
01:10:41.354 06:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 01:10:41.354 06:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:10:41.611 [2024-12-09 06:09:36.139263] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 01:10:41.611 06:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb527118-8d00-4eaf-9529-3f6961d9ff79 01:10:41.611 06:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 01:10:41.611 06:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb527118-8d00-4eaf-9529-3f6961d9ff79 01:10:41.612 06:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:10:41.612 06:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:10:41.612 06:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:10:41.612 06:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:10:41.612 06:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:10:41.870 06:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:10:41.870 06:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:10:41.870 06:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 01:10:41.870 06:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb527118-8d00-4eaf-9529-3f6961d9ff79 01:10:42.128 2024/12/09 06:09:36 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:eb527118-8d00-4eaf-9529-3f6961d9ff79], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 01:10:42.128 request: 01:10:42.128 { 01:10:42.128 "method": "bdev_lvol_get_lvstores", 01:10:42.128 "params": { 01:10:42.128 "uuid": "eb527118-8d00-4eaf-9529-3f6961d9ff79" 01:10:42.128 } 01:10:42.128 } 01:10:42.128 Got JSON-RPC error response 01:10:42.128 GoRPCClient: error on JSON-RPC call 01:10:42.128 06:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 01:10:42.128 06:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 
01:10:42.128 06:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:10:42.128 06:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:10:42.128 06:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:10:42.387 aio_bdev 01:10:42.387 06:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3f42b8e1-9e87-4ed4-9445-8a0e0a48b952 01:10:42.387 06:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=3f42b8e1-9e87-4ed4-9445-8a0e0a48b952 01:10:42.387 06:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:10:42.387 06:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 01:10:42.387 06:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:10:42.387 06:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:10:42.387 06:09:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 01:10:42.646 06:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3f42b8e1-9e87-4ed4-9445-8a0e0a48b952 -t 2000 01:10:42.903 [ 01:10:42.903 { 01:10:42.903 "aliases": [ 01:10:42.903 "lvs/lvol" 01:10:42.903 ], 01:10:42.903 "assigned_rate_limits": { 01:10:42.903 "r_mbytes_per_sec": 0, 01:10:42.903 "rw_ios_per_sec": 0, 01:10:42.903 "rw_mbytes_per_sec": 0, 01:10:42.903 "w_mbytes_per_sec": 0 01:10:42.903 }, 01:10:42.903 "block_size": 4096, 01:10:42.903 "claimed": false, 01:10:42.903 "driver_specific": { 01:10:42.903 "lvol": { 01:10:42.903 "base_bdev": "aio_bdev", 01:10:42.903 "clone": false, 01:10:42.903 "esnap_clone": false, 01:10:42.903 "lvol_store_uuid": "eb527118-8d00-4eaf-9529-3f6961d9ff79", 01:10:42.903 "num_allocated_clusters": 38, 01:10:42.903 "snapshot": false, 01:10:42.903 "thin_provision": false 01:10:42.903 } 01:10:42.903 }, 01:10:42.903 "name": "3f42b8e1-9e87-4ed4-9445-8a0e0a48b952", 01:10:42.903 "num_blocks": 38912, 01:10:42.903 "product_name": "Logical Volume", 01:10:42.903 "supported_io_types": { 01:10:42.903 "abort": false, 01:10:42.903 "compare": false, 01:10:42.903 "compare_and_write": false, 01:10:42.903 "copy": false, 01:10:42.903 "flush": false, 01:10:42.903 "get_zone_info": false, 01:10:42.903 "nvme_admin": false, 01:10:42.903 "nvme_io": false, 01:10:42.903 "nvme_io_md": false, 01:10:42.903 "nvme_iov_md": false, 01:10:42.903 "read": true, 01:10:42.903 "reset": true, 01:10:42.903 "seek_data": true, 01:10:42.903 "seek_hole": true, 01:10:42.903 "unmap": true, 01:10:42.903 "write": true, 01:10:42.903 "write_zeroes": true, 01:10:42.903 "zcopy": false, 01:10:42.903 "zone_append": false, 01:10:42.903 "zone_management": false 01:10:42.903 }, 01:10:42.903 "uuid": 
"3f42b8e1-9e87-4ed4-9445-8a0e0a48b952", 01:10:42.903 "zoned": false 01:10:42.903 } 01:10:42.903 ] 01:10:42.903 06:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 01:10:42.903 06:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb527118-8d00-4eaf-9529-3f6961d9ff79 01:10:42.903 06:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 01:10:43.161 06:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 01:10:43.161 06:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb527118-8d00-4eaf-9529-3f6961d9ff79 01:10:43.161 06:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 01:10:43.420 06:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 01:10:43.420 06:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 3f42b8e1-9e87-4ed4-9445-8a0e0a48b952 01:10:43.703 06:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eb527118-8d00-4eaf-9529-3f6961d9ff79 01:10:44.291 06:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:10:44.291 06:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:10:44.860 01:10:44.860 real 0m19.296s 01:10:44.860 user 0m18.937s 01:10:44.860 sys 0m2.099s 01:10:44.860 06:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 01:10:44.860 06:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 01:10:44.860 ************************************ 01:10:44.860 END TEST lvs_grow_clean 01:10:44.860 ************************************ 01:10:44.860 06:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 01:10:44.860 06:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:10:44.860 06:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 01:10:44.860 06:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:10:44.860 ************************************ 01:10:44.860 START TEST lvs_grow_dirty 01:10:44.860 ************************************ 01:10:44.860 06:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 01:10:44.860 06:09:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 01:10:44.860 06:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 01:10:44.860 06:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 01:10:44.860 06:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 01:10:44.860 06:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 01:10:44.860 06:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 01:10:44.860 06:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:10:44.860 06:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:10:44.860 06:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:10:45.119 06:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 01:10:45.119 06:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 01:10:45.378 06:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=55b92944-6f68-4a42-b817-2f1cafc88fae 01:10:45.378 06:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55b92944-6f68-4a42-b817-2f1cafc88fae 01:10:45.378 06:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 01:10:45.637 06:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 01:10:45.637 06:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 01:10:45.638 06:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 55b92944-6f68-4a42-b817-2f1cafc88fae lvol 150 01:10:45.897 06:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=15c60f29-be83-43ec-9a17-871b8cd15041 01:10:45.897 06:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:10:45.897 06:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 01:10:46.156 [2024-12-09 06:09:40.735101] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 01:10:46.156 [2024-12-09 06:09:40.735204] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 01:10:46.156 true 01:10:46.415 06:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55b92944-6f68-4a42-b817-2f1cafc88fae 01:10:46.415 06:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 01:10:46.674 06:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 01:10:46.674 06:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 01:10:46.934 06:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 15c60f29-be83-43ec-9a17-871b8cd15041 01:10:47.194 06:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:10:47.454 [2024-12-09 06:09:41.831599] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:10:47.454 06:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:10:47.713 06:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 01:10:47.713 06:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=102011 01:10:47.713 06:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:10:47.713 06:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 102011 /var/tmp/bdevperf.sock 01:10:47.713 06:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 102011 ']' 01:10:47.713 06:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:10:47.713 06:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 01:10:47.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
01:10:47.713 06:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:10:47.713 06:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 01:10:47.713 06:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:10:47.713 [2024-12-09 06:09:42.168448] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:10:47.713 [2024-12-09 06:09:42.168534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102011 ] 01:10:47.973 [2024-12-09 06:09:42.318711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:10:47.973 [2024-12-09 06:09:42.359341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:10:47.973 06:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:10:47.973 06:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 01:10:47.973 06:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 01:10:48.232 Nvme0n1 01:10:48.232 06:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 01:10:48.491 [ 01:10:48.491 { 01:10:48.491 "aliases": [ 01:10:48.491 "15c60f29-be83-43ec-9a17-871b8cd15041" 01:10:48.491 ], 01:10:48.491 "assigned_rate_limits": { 01:10:48.491 "r_mbytes_per_sec": 0, 01:10:48.491 "rw_ios_per_sec": 0, 01:10:48.491 "rw_mbytes_per_sec": 0, 01:10:48.491 "w_mbytes_per_sec": 0 01:10:48.491 }, 01:10:48.491 "block_size": 4096, 01:10:48.491 "claimed": false, 01:10:48.491 "driver_specific": { 01:10:48.491 "mp_policy": "active_passive", 01:10:48.491 "nvme": [ 01:10:48.491 { 01:10:48.491 "ctrlr_data": { 01:10:48.491 "ana_reporting": false, 01:10:48.491 "cntlid": 1, 01:10:48.491 "firmware_revision": "25.01", 01:10:48.491 "model_number": "SPDK bdev Controller", 01:10:48.491 "multi_ctrlr": true, 01:10:48.491 "oacs": { 01:10:48.491 "firmware": 0, 01:10:48.491 "format": 0, 01:10:48.491 "ns_manage": 0, 01:10:48.491 "security": 0 01:10:48.491 }, 01:10:48.491 "serial_number": "SPDK0", 01:10:48.491 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:10:48.491 "vendor_id": "0x8086" 01:10:48.491 }, 01:10:48.491 "ns_data": { 01:10:48.491 "can_share": true, 01:10:48.491 "id": 1 01:10:48.491 }, 01:10:48.491 "trid": { 01:10:48.491 "adrfam": "IPv4", 01:10:48.491 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:10:48.491 "traddr": "10.0.0.3", 01:10:48.491 "trsvcid": "4420", 01:10:48.491 "trtype": "TCP" 01:10:48.491 }, 01:10:48.491 "vs": { 01:10:48.491 "nvme_version": "1.3" 01:10:48.491 } 01:10:48.491 } 01:10:48.491 ] 01:10:48.491 }, 01:10:48.491 "memory_domains": [ 01:10:48.491 { 01:10:48.491 "dma_device_id": "system", 01:10:48.491 "dma_device_type": 1 
01:10:48.491 } 01:10:48.491 ], 01:10:48.491 "name": "Nvme0n1", 01:10:48.491 "num_blocks": 38912, 01:10:48.491 "numa_id": -1, 01:10:48.491 "product_name": "NVMe disk", 01:10:48.491 "supported_io_types": { 01:10:48.491 "abort": true, 01:10:48.491 "compare": true, 01:10:48.491 "compare_and_write": true, 01:10:48.491 "copy": true, 01:10:48.491 "flush": true, 01:10:48.491 "get_zone_info": false, 01:10:48.491 "nvme_admin": true, 01:10:48.491 "nvme_io": true, 01:10:48.491 "nvme_io_md": false, 01:10:48.491 "nvme_iov_md": false, 01:10:48.491 "read": true, 01:10:48.491 "reset": true, 01:10:48.491 "seek_data": false, 01:10:48.491 "seek_hole": false, 01:10:48.491 "unmap": true, 01:10:48.491 "write": true, 01:10:48.491 "write_zeroes": true, 01:10:48.491 "zcopy": false, 01:10:48.491 "zone_append": false, 01:10:48.491 "zone_management": false 01:10:48.491 }, 01:10:48.491 "uuid": "15c60f29-be83-43ec-9a17-871b8cd15041", 01:10:48.491 "zoned": false 01:10:48.491 } 01:10:48.491 ] 01:10:48.491 06:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=102037 01:10:48.491 06:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:10:48.491 06:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 01:10:48.750 Running I/O for 10 seconds... 01:10:49.687 Latency(us) 01:10:49.687 [2024-12-09T06:09:44.273Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:10:49.687 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:10:49.687 Nvme0n1 : 1.00 7371.00 28.79 0.00 0.00 0.00 0.00 0.00 01:10:49.687 [2024-12-09T06:09:44.273Z] =================================================================================================================== 01:10:49.687 [2024-12-09T06:09:44.273Z] Total : 7371.00 28.79 0.00 0.00 0.00 0.00 0.00 01:10:49.687 01:10:50.643 06:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 55b92944-6f68-4a42-b817-2f1cafc88fae 01:10:50.643 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:10:50.643 Nvme0n1 : 2.00 7503.00 29.31 0.00 0.00 0.00 0.00 0.00 01:10:50.643 [2024-12-09T06:09:45.229Z] =================================================================================================================== 01:10:50.643 [2024-12-09T06:09:45.229Z] Total : 7503.00 29.31 0.00 0.00 0.00 0.00 0.00 01:10:50.643 01:10:50.901 true 01:10:50.901 06:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55b92944-6f68-4a42-b817-2f1cafc88fae 01:10:50.901 06:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 01:10:51.159 06:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 01:10:51.159 06:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 01:10:51.159 06:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@65 -- # wait 102037 01:10:51.724 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:10:51.724 Nvme0n1 : 3.00 7521.67 29.38 0.00 0.00 0.00 0.00 0.00 01:10:51.724 [2024-12-09T06:09:46.310Z] =================================================================================================================== 01:10:51.724 [2024-12-09T06:09:46.310Z] Total : 7521.67 29.38 0.00 0.00 0.00 0.00 0.00 01:10:51.724 01:10:52.658 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:10:52.658 Nvme0n1 : 4.00 7518.25 29.37 0.00 0.00 0.00 0.00 0.00 01:10:52.659 [2024-12-09T06:09:47.245Z] =================================================================================================================== 01:10:52.659 [2024-12-09T06:09:47.245Z] Total : 7518.25 29.37 0.00 0.00 0.00 0.00 0.00 01:10:52.659 01:10:53.593 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:10:53.593 Nvme0n1 : 5.00 7541.80 29.46 0.00 0.00 0.00 0.00 0.00 01:10:53.593 [2024-12-09T06:09:48.179Z] =================================================================================================================== 01:10:53.593 [2024-12-09T06:09:48.179Z] Total : 7541.80 29.46 0.00 0.00 0.00 0.00 0.00 01:10:53.593 01:10:54.968 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:10:54.968 Nvme0n1 : 6.00 7517.00 29.36 0.00 0.00 0.00 0.00 0.00 01:10:54.968 [2024-12-09T06:09:49.554Z] =================================================================================================================== 01:10:54.968 [2024-12-09T06:09:49.554Z] Total : 7517.00 29.36 0.00 0.00 0.00 0.00 0.00 01:10:54.968 01:10:55.904 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:10:55.904 Nvme0n1 : 7.00 7290.14 28.48 0.00 0.00 0.00 0.00 0.00 01:10:55.904 [2024-12-09T06:09:50.490Z] =================================================================================================================== 01:10:55.904 [2024-12-09T06:09:50.490Z] Total : 7290.14 28.48 0.00 0.00 0.00 0.00 0.00 01:10:55.904 01:10:56.875 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:10:56.875 Nvme0n1 : 8.00 7261.00 28.36 0.00 0.00 0.00 0.00 0.00 01:10:56.875 [2024-12-09T06:09:51.461Z] =================================================================================================================== 01:10:56.875 [2024-12-09T06:09:51.461Z] Total : 7261.00 28.36 0.00 0.00 0.00 0.00 0.00 01:10:56.875 01:10:57.829 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:10:57.829 Nvme0n1 : 9.00 7245.89 28.30 0.00 0.00 0.00 0.00 0.00 01:10:57.829 [2024-12-09T06:09:52.415Z] =================================================================================================================== 01:10:57.829 [2024-12-09T06:09:52.415Z] Total : 7245.89 28.30 0.00 0.00 0.00 0.00 0.00 01:10:57.829 01:10:58.766 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 01:10:58.766 Nvme0n1 : 10.00 7228.50 28.24 0.00 0.00 0.00 0.00 0.00 01:10:58.766 [2024-12-09T06:09:53.352Z] =================================================================================================================== 01:10:58.766 [2024-12-09T06:09:53.352Z] Total : 7228.50 28.24 0.00 0.00 0.00 0.00 0.00 01:10:58.766 01:10:58.766 01:10:58.766 Latency(us) 01:10:58.766 [2024-12-09T06:09:53.352Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:10:58.766 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 01:10:58.766 Nvme0n1 : 10.01 7230.95 28.25 0.00 0.00 17689.55 7298.33 217341.21 01:10:58.766 [2024-12-09T06:09:53.352Z] =================================================================================================================== 01:10:58.766 [2024-12-09T06:09:53.352Z] Total : 7230.95 28.25 0.00 0.00 17689.55 7298.33 217341.21 01:10:58.766 { 01:10:58.766 "results": [ 01:10:58.766 { 01:10:58.766 "job": "Nvme0n1", 01:10:58.766 "core_mask": "0x2", 01:10:58.766 "workload": "randwrite", 01:10:58.766 "status": "finished", 01:10:58.766 "queue_depth": 128, 01:10:58.766 "io_size": 4096, 01:10:58.766 "runtime": 10.01432, 01:10:58.766 "iops": 7230.945286349947, 01:10:58.766 "mibps": 28.24588002480448, 01:10:58.766 "io_failed": 0, 01:10:58.766 "io_timeout": 0, 01:10:58.766 "avg_latency_us": 17689.550628252335, 01:10:58.767 "min_latency_us": 7298.327272727272, 01:10:58.767 "max_latency_us": 217341.20727272728 01:10:58.767 } 01:10:58.767 ], 01:10:58.767 "core_count": 1 01:10:58.767 } 01:10:58.767 06:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 102011 01:10:58.767 06:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 102011 ']' 01:10:58.767 06:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 102011 01:10:58.767 06:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 01:10:58.767 06:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:10:58.767 06:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102011 01:10:58.767 06:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:10:58.767 06:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:10:58.767 killing process with pid 102011 01:10:58.767 06:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102011' 01:10:58.767 Received shutdown signal, test time was about 10.000000 seconds 01:10:58.767 01:10:58.767 Latency(us) 01:10:58.767 [2024-12-09T06:09:53.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:10:58.767 [2024-12-09T06:09:53.353Z] =================================================================================================================== 01:10:58.767 [2024-12-09T06:09:53.353Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:10:58.767 06:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 102011 01:10:58.767 06:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 102011 01:10:59.025 06:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:10:59.284 06:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:10:59.542 06:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 01:10:59.542 06:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55b92944-6f68-4a42-b817-2f1cafc88fae 01:10:59.799 06:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 01:10:59.799 06:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 01:10:59.799 06:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 101419 01:10:59.799 06:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 101419 01:10:59.799 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 101419 Killed "${NVMF_APP[@]}" "$@" 01:10:59.799 06:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 01:10:59.799 06:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 01:10:59.799 06:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:10:59.799 06:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 01:10:59.799 06:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:10:59.799 06:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=102199 01:10:59.799 06:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 102199 01:10:59.799 06:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 01:10:59.799 06:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 102199 ']' 01:10:59.799 06:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:10:59.799 06:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 01:10:59.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:10:59.799 06:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
01:10:59.799 06:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 01:10:59.799 06:09:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:10:59.799 [2024-12-09 06:09:54.384290] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:11:00.058 [2024-12-09 06:09:54.385573] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:11:00.058 [2024-12-09 06:09:54.385667] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:11:00.058 [2024-12-09 06:09:54.541812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:00.058 [2024-12-09 06:09:54.582118] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:11:00.058 [2024-12-09 06:09:54.582198] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:11:00.058 [2024-12-09 06:09:54.582220] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:11:00.058 [2024-12-09 06:09:54.582230] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:11:00.058 [2024-12-09 06:09:54.582239] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:11:00.058 [2024-12-09 06:09:54.582612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:11:00.058 [2024-12-09 06:09:54.640175] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:11:00.058 [2024-12-09 06:09:54.640565] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
01:11:00.994 06:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:11:00.994 06:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 01:11:00.994 06:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:11:00.994 06:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 01:11:00.994 06:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:11:00.994 06:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:11:00.994 06:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:11:01.253 [2024-12-09 06:09:55.684851] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 01:11:01.253 [2024-12-09 06:09:55.685459] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 01:11:01.253 [2024-12-09 06:09:55.685765] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 01:11:01.253 06:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 01:11:01.253 06:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 15c60f29-be83-43ec-9a17-871b8cd15041 01:11:01.253 06:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=15c60f29-be83-43ec-9a17-871b8cd15041 01:11:01.253 06:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:11:01.253 06:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 01:11:01.253 06:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 01:11:01.253 06:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:11:01.253 06:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 01:11:01.511 06:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 15c60f29-be83-43ec-9a17-871b8cd15041 -t 2000 01:11:01.770 [ 01:11:01.770 { 01:11:01.770 "aliases": [ 01:11:01.770 "lvs/lvol" 01:11:01.770 ], 01:11:01.770 "assigned_rate_limits": { 01:11:01.770 "r_mbytes_per_sec": 0, 01:11:01.770 "rw_ios_per_sec": 0, 01:11:01.770 "rw_mbytes_per_sec": 0, 01:11:01.770 "w_mbytes_per_sec": 0 01:11:01.770 }, 01:11:01.770 "block_size": 4096, 01:11:01.770 "claimed": false, 01:11:01.770 "driver_specific": { 01:11:01.770 "lvol": { 01:11:01.770 "base_bdev": "aio_bdev", 01:11:01.770 "clone": false, 01:11:01.770 "esnap_clone": false, 01:11:01.770 
"lvol_store_uuid": "55b92944-6f68-4a42-b817-2f1cafc88fae", 01:11:01.770 "num_allocated_clusters": 38, 01:11:01.770 "snapshot": false, 01:11:01.770 "thin_provision": false 01:11:01.770 } 01:11:01.770 }, 01:11:01.770 "name": "15c60f29-be83-43ec-9a17-871b8cd15041", 01:11:01.770 "num_blocks": 38912, 01:11:01.770 "product_name": "Logical Volume", 01:11:01.770 "supported_io_types": { 01:11:01.770 "abort": false, 01:11:01.770 "compare": false, 01:11:01.770 "compare_and_write": false, 01:11:01.770 "copy": false, 01:11:01.770 "flush": false, 01:11:01.770 "get_zone_info": false, 01:11:01.770 "nvme_admin": false, 01:11:01.770 "nvme_io": false, 01:11:01.770 "nvme_io_md": false, 01:11:01.770 "nvme_iov_md": false, 01:11:01.770 "read": true, 01:11:01.770 "reset": true, 01:11:01.770 "seek_data": true, 01:11:01.770 "seek_hole": true, 01:11:01.770 "unmap": true, 01:11:01.770 "write": true, 01:11:01.770 "write_zeroes": true, 01:11:01.770 "zcopy": false, 01:11:01.770 "zone_append": false, 01:11:01.770 "zone_management": false 01:11:01.770 }, 01:11:01.770 "uuid": "15c60f29-be83-43ec-9a17-871b8cd15041", 01:11:01.770 "zoned": false 01:11:01.770 } 01:11:01.770 ] 01:11:01.770 06:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 01:11:01.770 06:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55b92944-6f68-4a42-b817-2f1cafc88fae 01:11:01.770 06:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 01:11:02.029 06:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 01:11:02.029 06:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55b92944-6f68-4a42-b817-2f1cafc88fae 01:11:02.029 06:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 01:11:02.288 06:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 01:11:02.288 06:09:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:11:02.547 [2024-12-09 06:09:57.099350] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 01:11:02.807 06:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55b92944-6f68-4a42-b817-2f1cafc88fae 01:11:02.807 06:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 01:11:02.807 06:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55b92944-6f68-4a42-b817-2f1cafc88fae 01:11:02.807 06:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:11:02.807 
06:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:11:02.807 06:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:11:02.807 06:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:11:02.807 06:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:11:02.807 06:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:11:02.807 06:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:11:02.808 06:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 01:11:02.808 06:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55b92944-6f68-4a42-b817-2f1cafc88fae 01:11:02.808 2024/12/09 06:09:57 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:55b92944-6f68-4a42-b817-2f1cafc88fae], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 01:11:02.808 request: 01:11:02.808 { 01:11:02.808 "method": "bdev_lvol_get_lvstores", 01:11:02.808 "params": { 01:11:02.808 "uuid": "55b92944-6f68-4a42-b817-2f1cafc88fae" 01:11:02.808 } 01:11:02.808 } 01:11:02.808 Got JSON-RPC error response 01:11:02.808 GoRPCClient: error on JSON-RPC call 01:11:03.067 06:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 01:11:03.067 06:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:11:03.067 06:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:11:03.067 06:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:11:03.067 06:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 01:11:03.327 aio_bdev 01:11:03.327 06:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 15c60f29-be83-43ec-9a17-871b8cd15041 01:11:03.327 06:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=15c60f29-be83-43ec-9a17-871b8cd15041 01:11:03.327 06:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 01:11:03.327 06:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 01:11:03.327 06:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 01:11:03.327 06:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 01:11:03.327 06:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 01:11:03.586 06:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 15c60f29-be83-43ec-9a17-871b8cd15041 -t 2000 01:11:03.845 [ 01:11:03.845 { 01:11:03.845 "aliases": [ 01:11:03.845 "lvs/lvol" 01:11:03.845 ], 01:11:03.845 "assigned_rate_limits": { 01:11:03.845 "r_mbytes_per_sec": 0, 01:11:03.845 "rw_ios_per_sec": 0, 01:11:03.845 "rw_mbytes_per_sec": 0, 01:11:03.845 "w_mbytes_per_sec": 0 01:11:03.845 }, 01:11:03.845 "block_size": 4096, 01:11:03.845 "claimed": false, 01:11:03.845 "driver_specific": { 01:11:03.845 "lvol": { 01:11:03.845 "base_bdev": "aio_bdev", 01:11:03.845 "clone": false, 01:11:03.845 "esnap_clone": false, 01:11:03.845 "lvol_store_uuid": "55b92944-6f68-4a42-b817-2f1cafc88fae", 01:11:03.845 "num_allocated_clusters": 38, 01:11:03.845 "snapshot": false, 01:11:03.845 "thin_provision": false 01:11:03.845 } 01:11:03.845 }, 01:11:03.845 "name": "15c60f29-be83-43ec-9a17-871b8cd15041", 01:11:03.845 "num_blocks": 38912, 01:11:03.845 "product_name": "Logical Volume", 01:11:03.845 "supported_io_types": { 01:11:03.845 "abort": false, 01:11:03.845 "compare": false, 01:11:03.845 "compare_and_write": false, 01:11:03.845 "copy": false, 01:11:03.845 "flush": false, 01:11:03.845 "get_zone_info": false, 01:11:03.845 "nvme_admin": false, 01:11:03.845 "nvme_io": false, 01:11:03.845 "nvme_io_md": false, 01:11:03.845 "nvme_iov_md": false, 01:11:03.845 "read": true, 01:11:03.845 "reset": true, 01:11:03.845 "seek_data": true, 01:11:03.845 "seek_hole": true, 01:11:03.845 "unmap": true, 01:11:03.845 "write": true, 01:11:03.845 "write_zeroes": true, 01:11:03.845 "zcopy": false, 01:11:03.845 "zone_append": false, 01:11:03.845 "zone_management": false 01:11:03.845 }, 01:11:03.845 "uuid": "15c60f29-be83-43ec-9a17-871b8cd15041", 01:11:03.845 "zoned": false 01:11:03.845 } 01:11:03.845 ] 01:11:03.845 06:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 01:11:03.845 06:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 01:11:03.845 06:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55b92944-6f68-4a42-b817-2f1cafc88fae 01:11:04.104 06:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 01:11:04.104 06:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 01:11:04.104 06:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55b92944-6f68-4a42-b817-2f1cafc88fae 01:11:04.362 06:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 01:11:04.362 
06:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 15c60f29-be83-43ec-9a17-871b8cd15041 01:11:04.621 06:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 55b92944-6f68-4a42-b817-2f1cafc88fae 01:11:04.900 06:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 01:11:05.466 06:09:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 01:11:05.724 ************************************ 01:11:05.724 END TEST lvs_grow_dirty 01:11:05.724 ************************************ 01:11:05.724 01:11:05.724 real 0m20.930s 01:11:05.724 user 0m27.964s 01:11:05.724 sys 0m8.174s 01:11:05.724 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 01:11:05.724 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 01:11:05.724 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 01:11:05.724 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 01:11:05.725 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 01:11:05.725 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 01:11:05.725 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 01:11:05.725 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 01:11:05.725 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 01:11:05.725 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 01:11:05.725 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 01:11:05.725 nvmf_trace.0 01:11:05.982 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 01:11:05.982 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 01:11:05.982 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 01:11:05.982 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 01:11:06.547 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:11:06.547 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 01:11:06.547 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 01:11:06.547 06:10:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:11:06.547 rmmod nvme_tcp 01:11:06.547 rmmod nvme_fabrics 01:11:06.547 rmmod nvme_keyring 01:11:06.547 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:11:06.547 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 01:11:06.547 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 01:11:06.547 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 102199 ']' 01:11:06.547 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 102199 01:11:06.547 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 102199 ']' 01:11:06.547 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 102199 01:11:06.547 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 01:11:06.547 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:11:06.547 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102199 01:11:06.547 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:11:06.547 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:11:06.547 killing process with pid 102199 01:11:06.547 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102199' 01:11:06.547 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 102199 01:11:06.547 06:10:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 102199 01:11:06.547 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:11:06.547 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:11:06.547 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:11:06.547 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 01:11:06.547 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 01:11:06.547 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:11:06.547 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 01:11:06.805 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:11:06.805 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:11:06.805 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:11:06.805 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- 
# ip link set nvmf_init_br2 nomaster 01:11:06.805 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:11:06.805 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:11:06.805 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:11:06.805 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:11:06.805 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:11:06.805 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:11:06.805 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:11:06.805 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:11:06.805 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:11:06.805 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:11:06.805 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:11:06.805 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@246 -- # remove_spdk_ns 01:11:06.805 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:11:06.805 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:11:06.805 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:11:06.805 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 01:11:06.805 01:11:06.805 real 0m42.743s 01:11:06.805 user 0m48.208s 01:11:06.805 sys 0m11.429s 01:11:06.805 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 01:11:06.805 ************************************ 01:11:06.805 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 01:11:06.805 END TEST nvmf_lvs_grow 01:11:06.805 ************************************ 01:11:07.089 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 01:11:07.089 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:11:07.089 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:11:07.089 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:11:07.090 ************************************ 01:11:07.090 START TEST nvmf_bdev_io_wait 01:11:07.090 ************************************ 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 01:11:07.090 * Looking for test storage... 01:11:07.090 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:11:07.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:11:07.090 --rc genhtml_branch_coverage=1 01:11:07.090 --rc genhtml_function_coverage=1 01:11:07.090 --rc genhtml_legend=1 01:11:07.090 --rc geninfo_all_blocks=1 01:11:07.090 --rc geninfo_unexecuted_blocks=1 01:11:07.090 01:11:07.090 ' 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:11:07.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:11:07.090 --rc genhtml_branch_coverage=1 01:11:07.090 --rc genhtml_function_coverage=1 01:11:07.090 --rc genhtml_legend=1 01:11:07.090 --rc geninfo_all_blocks=1 01:11:07.090 --rc geninfo_unexecuted_blocks=1 01:11:07.090 01:11:07.090 ' 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:11:07.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:11:07.090 --rc genhtml_branch_coverage=1 01:11:07.090 --rc genhtml_function_coverage=1 01:11:07.090 --rc genhtml_legend=1 01:11:07.090 --rc geninfo_all_blocks=1 01:11:07.090 --rc geninfo_unexecuted_blocks=1 01:11:07.090 01:11:07.090 ' 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:11:07.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:11:07.090 --rc genhtml_branch_coverage=1 01:11:07.090 --rc genhtml_function_coverage=1 01:11:07.090 --rc genhtml_legend=1 01:11:07.090 --rc geninfo_all_blocks=1 01:11:07.090 --rc 
geninfo_unexecuted_blocks=1 01:11:07.090 01:11:07.090 ' 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 01:11:07.090 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:11:07.091 Cannot find device "nvmf_init_br" 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 01:11:07.091 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:11:07.348 Cannot find device "nvmf_init_br2" 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:11:07.348 Cannot find device "nvmf_tgt_br" 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:11:07.348 Cannot find device "nvmf_tgt_br2" 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:11:07.348 Cannot find device "nvmf_init_br" 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:11:07.348 Cannot find device "nvmf_init_br2" 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- 
# ip link set nvmf_tgt_br down 01:11:07.348 Cannot find device "nvmf_tgt_br" 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:11:07.348 Cannot find device "nvmf_tgt_br2" 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:11:07.348 Cannot find device "nvmf_br" 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:11:07.348 Cannot find device "nvmf_init_if" 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:11:07.348 Cannot find device "nvmf_init_if2" 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:11:07.348 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:11:07.348 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:11:07.348 06:10:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:11:07.348 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:11:07.606 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:11:07.606 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:11:07.606 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:11:07.606 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:11:07.606 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:11:07.606 06:10:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:11:07.606 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:11:07.606 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:11:07.606 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:11:07.606 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:11:07.606 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:11:07.606 
06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:11:07.606 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:11:07.606 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:11:07.606 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 01:11:07.606 01:11:07.606 --- 10.0.0.3 ping statistics --- 01:11:07.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:11:07.606 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 01:11:07.606 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:11:07.606 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:11:07.606 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 01:11:07.606 01:11:07.606 --- 10.0.0.4 ping statistics --- 01:11:07.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:11:07.606 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 01:11:07.606 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:11:07.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:11:07.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 01:11:07.606 01:11:07.606 --- 10.0.0.1 ping statistics --- 01:11:07.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:11:07.606 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 01:11:07.606 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:11:07.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:11:07.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 01:11:07.606 01:11:07.606 --- 10.0.0.2 ping statistics --- 01:11:07.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:11:07.606 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 01:11:07.606 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:11:07.606 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 01:11:07.606 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:11:07.606 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:11:07.606 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:11:07.606 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:11:07.606 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:11:07.606 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:11:07.607 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:11:07.607 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 01:11:07.607 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:11:07.607 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 01:11:07.607 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:11:07.607 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=102666 01:11:07.607 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 01:11:07.607 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 102666 01:11:07.607 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 102666 ']' 01:11:07.607 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:11:07.607 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 01:11:07.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:11:07.607 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
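
The connectivity checks above complete the virtual test network that nvmf_veth_init builds. Condensed from the ip/iptables commands traced earlier (the second interface pair nvmf_init_if2/nvmf_tgt_if2, the "link set ... up" steps, and the iptables comment tags are elided for brevity), the topology is roughly:

    # target side lives in its own network namespace; the initiator side stays in the root namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
    ip link add nvmf_br type bridge                                # bridge ties the two halves together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.3                                             # sanity check, as in the trace above
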
01:11:07.607 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 01:11:07.607 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:11:07.607 [2024-12-09 06:10:02.163246] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:11:07.607 [2024-12-09 06:10:02.165309] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:11:07.607 [2024-12-09 06:10:02.165429] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:11:07.865 [2024-12-09 06:10:02.332223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:11:07.865 [2024-12-09 06:10:02.373080] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:11:07.865 [2024-12-09 06:10:02.373154] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:11:07.865 [2024-12-09 06:10:02.373175] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:11:07.865 [2024-12-09 06:10:02.373185] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:11:07.865 [2024-12-09 06:10:02.373193] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:11:07.865 [2024-12-09 06:10:02.374134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:11:07.865 [2024-12-09 06:10:02.374274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:11:07.865 [2024-12-09 06:10:02.374627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:11:07.865 [2024-12-09 06:10:02.374633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:11:07.865 [2024-12-09 06:10:02.375197] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
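
The notices above come from the target starting inside that namespace. As traced at nvmf/common.sh@508, the launch is essentially (flag comments are interpretation of the notices, not part of the trace):

    # run the SPDK NVMe-oF target in the test namespace (pid 102666 in this run)
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 \               # shared-memory instance id
        -e 0xFFFF \          # tracepoint group mask, matching the "Tracepoint Group Mask 0xFFFF" notice
        --interrupt-mode \   # reactors sleep on events instead of busy polling
        -m 0xF \             # reactors on cores 0-3, matching the "Reactor started on core N" notices
        --wait-for-rpc       # hold off framework init until framework_start_init arrives over RPC
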
01:11:07.865 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:11:07.865 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 01:11:07.865 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:11:07.865 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 01:11:07.865 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:11:08.123 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:11:08.123 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 01:11:08.123 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:08.123 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:11:08.123 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:08.123 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 01:11:08.123 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:08.123 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:11:08.123 [2024-12-09 06:10:02.519497] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:11:08.123 [2024-12-09 06:10:02.519703] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 01:11:08.123 [2024-12-09 06:10:02.520116] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 01:11:08.123 [2024-12-09 06:10:02.520355] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
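
bdev_set_options -p 5 -c 1 is issued before framework_start_init because the target was started with --wait-for-rpc; bdev options can only be changed before the bdev layer initializes. Shrinking the pool to a handful of bdev_io structures is presumably what forces the out-of-bdev_io / io-wait path this test (bdev_io_wait) exercises. A rough standalone equivalent of the two rpc_cmd calls above (rpc_cmd effectively forwards to SPDK's scripts/rpc.py):

    scripts/rpc.py bdev_set_options -p 5 -c 1   # tiny bdev_io pool (size 5) and per-thread cache (1)
    scripts/rpc.py framework_start_init         # now let the deferred framework initialization run
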
01:11:08.123 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:08.123 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:11:08.123 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:08.123 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:11:08.123 [2024-12-09 06:10:02.527432] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:11:08.123 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:08.123 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:11:08.123 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:08.123 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:11:08.123 Malloc0 01:11:08.123 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:08.123 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:11:08.123 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:08.123 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:11:08.123 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:08.123 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:11:08.123 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:08.123 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:11:08.123 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:08.123 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:11:08.123 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:08.123 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 01:11:08.123 [2024-12-09 06:10:02.583777] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=102706 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=102708 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:11:08.124 { 01:11:08.124 "params": { 01:11:08.124 "name": "Nvme$subsystem", 01:11:08.124 "trtype": "$TEST_TRANSPORT", 01:11:08.124 "traddr": "$NVMF_FIRST_TARGET_IP", 01:11:08.124 "adrfam": "ipv4", 01:11:08.124 "trsvcid": "$NVMF_PORT", 01:11:08.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:11:08.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:11:08.124 "hdgst": ${hdgst:-false}, 01:11:08.124 "ddgst": ${ddgst:-false} 01:11:08.124 }, 01:11:08.124 "method": "bdev_nvme_attach_controller" 01:11:08.124 } 01:11:08.124 EOF 01:11:08.124 )") 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=102710 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:11:08.124 { 01:11:08.124 "params": { 01:11:08.124 "name": "Nvme$subsystem", 01:11:08.124 "trtype": "$TEST_TRANSPORT", 01:11:08.124 "traddr": "$NVMF_FIRST_TARGET_IP", 01:11:08.124 "adrfam": "ipv4", 01:11:08.124 "trsvcid": "$NVMF_PORT", 01:11:08.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:11:08.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:11:08.124 "hdgst": ${hdgst:-false}, 01:11:08.124 "ddgst": ${ddgst:-false} 01:11:08.124 }, 01:11:08.124 "method": "bdev_nvme_attach_controller" 01:11:08.124 } 01:11:08.124 EOF 01:11:08.124 )") 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=102713 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 01:11:08.124 06:10:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:11:08.124 { 01:11:08.124 "params": { 01:11:08.124 "name": "Nvme$subsystem", 01:11:08.124 "trtype": "$TEST_TRANSPORT", 01:11:08.124 "traddr": "$NVMF_FIRST_TARGET_IP", 01:11:08.124 "adrfam": "ipv4", 01:11:08.124 "trsvcid": "$NVMF_PORT", 01:11:08.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:11:08.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:11:08.124 "hdgst": ${hdgst:-false}, 01:11:08.124 "ddgst": ${ddgst:-false} 01:11:08.124 }, 01:11:08.124 "method": "bdev_nvme_attach_controller" 01:11:08.124 } 01:11:08.124 EOF 01:11:08.124 )") 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:11:08.124 "params": { 01:11:08.124 "name": "Nvme1", 01:11:08.124 "trtype": "tcp", 01:11:08.124 "traddr": "10.0.0.3", 01:11:08.124 "adrfam": "ipv4", 01:11:08.124 "trsvcid": "4420", 01:11:08.124 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:11:08.124 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:11:08.124 "hdgst": false, 01:11:08.124 "ddgst": false 01:11:08.124 }, 01:11:08.124 "method": "bdev_nvme_attach_controller" 01:11:08.124 }' 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:11:08.124 "params": { 01:11:08.124 "name": "Nvme1", 01:11:08.124 "trtype": "tcp", 01:11:08.124 "traddr": "10.0.0.3", 01:11:08.124 "adrfam": "ipv4", 01:11:08.124 "trsvcid": "4420", 01:11:08.124 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:11:08.124 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:11:08.124 "hdgst": false, 01:11:08.124 "ddgst": false 01:11:08.124 }, 01:11:08.124 "method": "bdev_nvme_attach_controller" 01:11:08.124 }' 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
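
By this point the target has been provisioned (TCP transport, a 64 MB Malloc0 ram-disk bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.3:4420), and four bdevperf instances are being launched against it. The "--json /dev/fd/63" on each bdevperf command line is bash process substitution: the JSON printed by gen_nvmf_target_json (the bdev_nvme_attach_controller block shown above) reaches bdevperf through an anonymous file descriptor. A rough standalone equivalent, with rpc_cmd expanded to SPDK's rpc.py:

    # target-side provisioning, as traced at bdev_io_wait.sh@20-25
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # initiator side: one of the four bdevperf invocations, with the process substitution spelled out
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 \
        --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256
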
01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:11:08.124 "params": { 01:11:08.124 "name": "Nvme1", 01:11:08.124 "trtype": "tcp", 01:11:08.124 "traddr": "10.0.0.3", 01:11:08.124 "adrfam": "ipv4", 01:11:08.124 "trsvcid": "4420", 01:11:08.124 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:11:08.124 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:11:08.124 "hdgst": false, 01:11:08.124 "ddgst": false 01:11:08.124 }, 01:11:08.124 "method": "bdev_nvme_attach_controller" 01:11:08.124 }' 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:11:08.124 { 01:11:08.124 "params": { 01:11:08.124 "name": "Nvme$subsystem", 01:11:08.124 "trtype": "$TEST_TRANSPORT", 01:11:08.124 "traddr": "$NVMF_FIRST_TARGET_IP", 01:11:08.124 "adrfam": "ipv4", 01:11:08.124 "trsvcid": "$NVMF_PORT", 01:11:08.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:11:08.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:11:08.124 "hdgst": ${hdgst:-false}, 01:11:08.124 "ddgst": ${ddgst:-false} 01:11:08.124 }, 01:11:08.124 "method": "bdev_nvme_attach_controller" 01:11:08.124 } 01:11:08.124 EOF 01:11:08.124 )") 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 01:11:08.124 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:11:08.124 "params": { 01:11:08.124 "name": "Nvme1", 01:11:08.124 "trtype": "tcp", 01:11:08.124 "traddr": "10.0.0.3", 01:11:08.124 "adrfam": "ipv4", 01:11:08.124 "trsvcid": "4420", 01:11:08.124 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:11:08.124 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:11:08.125 "hdgst": false, 01:11:08.125 "ddgst": false 01:11:08.125 }, 01:11:08.125 "method": "bdev_nvme_attach_controller" 01:11:08.125 }' 01:11:08.125 [2024-12-09 06:10:02.648396] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:11:08.125 [2024-12-09 06:10:02.648483] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 01:11:08.125 06:10:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 102706 01:11:08.125 [2024-12-09 06:10:02.661183] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:11:08.125 [2024-12-09 06:10:02.661274] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 01:11:08.125 [2024-12-09 06:10:02.670766] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:11:08.125 [2024-12-09 06:10:02.671012] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 01:11:08.125 [2024-12-09 06:10:02.689932] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:11:08.125 [2024-12-09 06:10:02.690023] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 01:11:08.382 [2024-12-09 06:10:02.836371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:08.382 [2024-12-09 06:10:02.867404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 01:11:08.382 [2024-12-09 06:10:02.875259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:08.382 [2024-12-09 06:10:02.905857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:11:08.382 [2024-12-09 06:10:02.921348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:08.382 [2024-12-09 06:10:02.952094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 01:11:08.382 [2024-12-09 06:10:02.964747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:08.638 Running I/O for 1 seconds... 01:11:08.638 [2024-12-09 06:10:02.990591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 01:11:08.638 Running I/O for 1 seconds... 01:11:08.638 Running I/O for 1 seconds... 01:11:08.638 Running I/O for 1 seconds... 
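
Four initiator-side bdevperf processes now run in parallel, one per workload, each pinned to its own core (cores 4-7 per the reactor notices) so they stay off the target's cores 0-3. In outline, matching the PIDs captured above:

    # WRITE_PID=102706  bdevperf -m 0x10 -i 1 -w write -q 128 -o 4096 -t 1 -s 256
    # READ_PID=102708   bdevperf -m 0x20 -i 2 -w read  -q 128 -o 4096 -t 1 -s 256
    # FLUSH_PID=102710  bdevperf -m 0x40 -i 3 -w flush -q 128 -o 4096 -t 1 -s 256
    # UNMAP_PID=102713  bdevperf -m 0x80 -i 4 -w unmap -q 128 -o 4096 -t 1 -s 256
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"   # the script waits on each in turn (bdev_io_wait.sh@37-40)
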
01:11:09.572 6269.00 IOPS, 24.49 MiB/s
01:11:09.572 Latency(us)
01:11:09.572 [2024-12-09T06:10:04.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:11:09.572 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
01:11:09.572 Nvme1n1 : 1.02 6284.48 24.55 0.00 0.00 20119.16 4230.05 30384.87
01:11:09.572 [2024-12-09T06:10:04.158Z] ===================================================================================================================
01:11:09.572 [2024-12-09T06:10:04.158Z] Total : 6284.48 24.55 0.00 0.00 20119.16 4230.05 30384.87
01:11:09.572 8755.00 IOPS, 34.20 MiB/s
01:11:09.572 Latency(us)
01:11:09.572 [2024-12-09T06:10:04.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:11:09.572 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
01:11:09.572 Nvme1n1 : 1.01 8825.74 34.48 0.00 0.00 14440.91 2606.55 19541.64
01:11:09.572 [2024-12-09T06:10:04.158Z] ===================================================================================================================
01:11:09.572 [2024-12-09T06:10:04.158Z] Total : 8825.74 34.48 0.00 0.00 14440.91 2606.55 19541.64
01:11:09.572 177368.00 IOPS, 692.84 MiB/s
01:11:09.572 Latency(us)
01:11:09.572 [2024-12-09T06:10:04.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:11:09.572 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
01:11:09.572 Nvme1n1 : 1.00 177020.72 691.49 0.00 0.00 719.11 310.92 1951.19
01:11:09.572 [2024-12-09T06:10:04.158Z] ===================================================================================================================
01:11:09.572 [2024-12-09T06:10:04.158Z] Total : 177020.72 691.49 0.00 0.00 719.11 310.92 1951.19
01:11:09.572 6665.00 IOPS, 26.04 MiB/s
01:11:09.572 Latency(us)
01:11:09.572 [2024-12-09T06:10:04.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:11:09.572 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
01:11:09.572 Nvme1n1 : 1.01 6781.08 26.49 0.00 0.00 18854.29 6106.76 40513.16
01:11:09.572 [2024-12-09T06:10:04.158Z] ===================================================================================================================
01:11:09.572 [2024-12-09T06:10:04.158Z] Total : 6781.08 26.49 0.00 0.00 18854.29 6106.76 40513.16
01:11:09.572 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 102708
01:11:09.572 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 102710
01:11:09.832 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 102713
01:11:09.832 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
01:11:09.832 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
01:11:09.832 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
01:11:09.832 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:11:09.832 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
01:11:09.832 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait --
target/bdev_io_wait.sh@46 -- # nvmftestfini 01:11:09.832 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 01:11:09.832 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 01:11:09.832 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:11:09.832 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 01:11:09.832 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 01:11:09.832 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:11:09.832 rmmod nvme_tcp 01:11:09.832 rmmod nvme_fabrics 01:11:09.832 rmmod nvme_keyring 01:11:09.832 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:11:09.832 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 01:11:09.832 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 01:11:09.832 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 102666 ']' 01:11:09.832 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 102666 01:11:09.832 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 102666 ']' 01:11:09.832 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 102666 01:11:09.832 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 01:11:09.832 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:11:09.832 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102666 01:11:09.832 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:11:09.832 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:11:09.832 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102666' 01:11:09.832 killing process with pid 102666 01:11:09.832 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 102666 01:11:09.832 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 102666 01:11:10.096 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:11:10.096 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:11:10.096 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:11:10.096 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 01:11:10.096 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 01:11:10.096 
06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:11:10.096 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 01:11:10.096 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:11:10.096 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:11:10.096 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:11:10.096 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:11:10.096 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:11:10.096 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:11:10.096 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:11:10.096 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:11:10.096 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:11:10.096 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:11:10.096 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:11:10.096 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:11:10.096 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:11:10.096 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:11:10.096 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:11:10.359 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 01:11:10.359 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:11:10.359 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:11:10.359 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:11:10.359 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 01:11:10.359 ************************************ 01:11:10.359 END TEST nvmf_bdev_io_wait 01:11:10.359 ************************************ 01:11:10.359 01:11:10.359 real 0m3.302s 01:11:10.359 user 0m11.799s 01:11:10.359 sys 0m2.179s 01:11:10.359 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 01:11:10.359 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 
-- # set +x 01:11:10.359 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 01:11:10.359 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:11:10.359 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:11:10.359 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:11:10.359 ************************************ 01:11:10.359 START TEST nvmf_queue_depth 01:11:10.359 ************************************ 01:11:10.359 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 01:11:10.359 * Looking for test storage... 01:11:10.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:11:10.359 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:11:10.359 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 01:11:10.359 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:11:10.359 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:11:10.359 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:11:10.359 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 01:11:10.359 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 01:11:10.360 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 01:11:10.360 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 01:11:10.360 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 01:11:10.360 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 01:11:10.360 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 01:11:10.360 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 01:11:10.360 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 01:11:10.360 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:11:10.360 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 01:11:10.360 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 01:11:10.360 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 01:11:10.360 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:11:10.360 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 01:11:10.360 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 01:11:10.360 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:11:10.360 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 01:11:10.360 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 01:11:10.360 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:11:10.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:11:10.619 --rc genhtml_branch_coverage=1 01:11:10.619 --rc genhtml_function_coverage=1 01:11:10.619 --rc genhtml_legend=1 01:11:10.619 --rc geninfo_all_blocks=1 01:11:10.619 --rc geninfo_unexecuted_blocks=1 01:11:10.619 01:11:10.619 ' 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:11:10.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:11:10.619 --rc genhtml_branch_coverage=1 01:11:10.619 --rc genhtml_function_coverage=1 01:11:10.619 --rc genhtml_legend=1 01:11:10.619 --rc geninfo_all_blocks=1 01:11:10.619 --rc geninfo_unexecuted_blocks=1 01:11:10.619 01:11:10.619 ' 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:11:10.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:11:10.619 --rc genhtml_branch_coverage=1 01:11:10.619 --rc genhtml_function_coverage=1 01:11:10.619 --rc genhtml_legend=1 01:11:10.619 --rc geninfo_all_blocks=1 01:11:10.619 --rc geninfo_unexecuted_blocks=1 01:11:10.619 01:11:10.619 ' 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:11:10.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:11:10.619 --rc genhtml_branch_coverage=1 01:11:10.619 --rc genhtml_function_coverage=1 01:11:10.619 --rc genhtml_legend=1 01:11:10.619 --rc geninfo_all_blocks=1 01:11:10.619 --rc 
geninfo_unexecuted_blocks=1 01:11:10.619 01:11:10.619 ' 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:11:10.619 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@148 -- # 
NVMF_SECOND_TARGET_IP=10.0.0.4 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:11:10.620 Cannot find device "nvmf_init_br" 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 01:11:10.620 06:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:11:10.620 Cannot find device "nvmf_init_br2" 01:11:10.620 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 01:11:10.620 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:11:10.620 Cannot find device "nvmf_tgt_br" 01:11:10.620 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 01:11:10.620 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:11:10.620 Cannot find device "nvmf_tgt_br2" 01:11:10.620 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 01:11:10.620 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:11:10.620 Cannot find device "nvmf_init_br" 01:11:10.620 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 01:11:10.620 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:11:10.620 Cannot find device "nvmf_init_br2" 01:11:10.620 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 01:11:10.620 
06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:11:10.620 Cannot find device "nvmf_tgt_br" 01:11:10.620 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 01:11:10.620 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:11:10.620 Cannot find device "nvmf_tgt_br2" 01:11:10.620 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 01:11:10.620 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:11:10.620 Cannot find device "nvmf_br" 01:11:10.620 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 01:11:10.620 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:11:10.620 Cannot find device "nvmf_init_if" 01:11:10.620 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 01:11:10.620 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:11:10.620 Cannot find device "nvmf_init_if2" 01:11:10.620 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 01:11:10.620 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:11:10.620 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:11:10.620 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 01:11:10.620 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:11:10.620 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:11:10.620 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 01:11:10.620 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:11:10.620 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:11:10.620 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:11:10.620 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:11:10.620 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:11:10.620 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:11:10.620 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:11:10.620 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:11:10.620 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@191 -- 
# ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i 
nvmf_br -o nvmf_br -j ACCEPT 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:11:10.879 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:11:10.879 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 01:11:10.879 01:11:10.879 --- 10.0.0.3 ping statistics --- 01:11:10.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:11:10.879 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:11:10.879 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:11:10.879 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 01:11:10.879 01:11:10.879 --- 10.0.0.4 ping statistics --- 01:11:10.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:11:10.879 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:11:10.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:11:10.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 01:11:10.879 01:11:10.879 --- 10.0.0.1 ping statistics --- 01:11:10.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:11:10.879 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:11:10.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:11:10.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 01:11:10.879 01:11:10.879 --- 10.0.0.2 ping statistics --- 01:11:10.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:11:10.879 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=102969 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 102969 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 102969 ']' 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 01:11:10.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
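Everything from the "Cannot find device" probes down to the four ping checks above is nvmf_veth_init (test/nvmf/common.sh) building the virtual topology for this --interrupt-mode run: a dedicated target namespace, two initiator-side and two target-side veth pairs, a bridge joining the peer ends, SPDK_NVMF-tagged iptables ACCEPT rules for port 4420, and reachability checks in both directions. A condensed replay of the commands actually logged above (addresses and interface names are the ones the test uses; error handling and the symmetric nvmf_init_if2 iptables rule are omitted for brevity):

# Target namespace plus veth pairs; the *_br peer ends stay in the host namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Initiator addresses on the host, target addresses inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up, then enslave the bridge-side ends to nvmf_br.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Let NVMe/TCP traffic in, tagged so nvmftestfini can strip the rules later.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

# Reachability in both directions before the target is started.
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

Keeping both target interfaces inside nvmf_tgt_ns_spdk is what lets nvmftestfini tear the whole topology down with the ip link delete / ip netns commands seen at the end of the previous test.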
01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 01:11:10.879 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:11:10.879 [2024-12-09 06:10:05.439104] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:11:10.879 [2024-12-09 06:10:05.440384] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:11:10.879 [2024-12-09 06:10:05.440493] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:11:11.137 [2024-12-09 06:10:05.597379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:11.137 [2024-12-09 06:10:05.635159] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:11:11.137 [2024-12-09 06:10:05.635225] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:11:11.137 [2024-12-09 06:10:05.635251] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:11:11.137 [2024-12-09 06:10:05.635261] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:11:11.137 [2024-12-09 06:10:05.635269] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:11:11.137 [2024-12-09 06:10:05.635631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:11:11.137 [2024-12-09 06:10:05.692049] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:11:11.137 [2024-12-09 06:10:05.692420] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
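The interrupt-mode notices above come from the target launch a few entries earlier: nvmfappstart runs nvmf_tgt inside the test namespace with the --interrupt-mode flag that nvmf/common.sh appended to NVMF_APP, so the app thread and the nvmf poll group are set to intr mode and a single reactor runs on core 1 (mask 0x2). The launch, verbatim from the log:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2   # pid 102969, RPC on /var/tmp/spdk.sock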
01:11:11.395 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:11:11.395 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 01:11:11.395 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:11:11.395 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 01:11:11.395 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:11:11.395 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:11:11.395 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:11:11.395 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.395 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:11:11.395 [2024-12-09 06:10:05.780486] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:11:11.396 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.396 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:11:11.396 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.396 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:11:11.396 Malloc0 01:11:11.396 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.396 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:11:11.396 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.396 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:11:11.396 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.396 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:11:11.396 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.396 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:11:11.396 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.396 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:11:11.396 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
01:11:11.396 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:11:11.396 [2024-12-09 06:10:05.836510] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:11:11.396 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.396 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=103007 01:11:11.396 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 01:11:11.396 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:11:11.396 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 103007 /var/tmp/bdevperf.sock 01:11:11.396 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 103007 ']' 01:11:11.396 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 01:11:11.396 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 01:11:11.396 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 01:11:11.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 01:11:11.396 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 01:11:11.396 06:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:11:11.396 [2024-12-09 06:10:05.901795] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
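With the target listening on 10.0.0.3:4420, queue_depth.sh configures it entirely over RPC and then drives it from a separate bdevperf process at queue depth 1024; the controller attach and perform_tests calls appear in the log just below. The sequence, assembled from the rpc_cmd and bdevperf invocations logged here (rpc_cmd is the test framework's wrapper around scripts/rpc.py, and the socket paths are the ones shown in the log -- both are inferences, not additions to the script):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side: TCP transport, a 64 MiB / 512 B-block malloc bdev, and a
# subsystem exposing that namespace on 10.0.0.3:4420.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Initiator side: bdevperf in RPC-wait mode (-z), attach the remote controller,
# then kick off the 10 s verify run at queue depth 1024.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests

Starting bdevperf with -z keeps it idle until perform_tests is issued, so the controller attach and the measured I/O window stay cleanly separated.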
01:11:11.396 [2024-12-09 06:10:05.901898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103007 ] 01:11:11.655 [2024-12-09 06:10:06.054913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:11.655 [2024-12-09 06:10:06.095854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:11:11.655 06:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:11:11.655 06:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 01:11:11.655 06:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 01:11:11.655 06:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:11.655 06:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:11:11.914 NVMe0n1 01:11:11.914 06:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:11.914 06:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 01:11:11.914 Running I/O for 10 seconds... 01:11:13.788 8192.00 IOPS, 32.00 MiB/s [2024-12-09T06:10:09.752Z] 8484.00 IOPS, 33.14 MiB/s [2024-12-09T06:10:10.689Z] 8573.67 IOPS, 33.49 MiB/s [2024-12-09T06:10:11.625Z] 8789.50 IOPS, 34.33 MiB/s [2024-12-09T06:10:12.561Z] 8933.20 IOPS, 34.90 MiB/s [2024-12-09T06:10:13.496Z] 9049.17 IOPS, 35.35 MiB/s [2024-12-09T06:10:14.428Z] 9112.71 IOPS, 35.60 MiB/s [2024-12-09T06:10:15.800Z] 9176.12 IOPS, 35.84 MiB/s [2024-12-09T06:10:16.375Z] 9244.56 IOPS, 36.11 MiB/s [2024-12-09T06:10:16.633Z] 9290.20 IOPS, 36.29 MiB/s 01:11:22.047 Latency(us) 01:11:22.047 [2024-12-09T06:10:16.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:22.047 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 01:11:22.047 Verification LBA range: start 0x0 length 0x4000 01:11:22.047 NVMe0n1 : 10.08 9308.58 36.36 0.00 0.00 109470.07 25499.46 78166.57 01:11:22.047 [2024-12-09T06:10:16.633Z] =================================================================================================================== 01:11:22.047 [2024-12-09T06:10:16.633Z] Total : 9308.58 36.36 0.00 0.00 109470.07 25499.46 78166.57 01:11:22.047 { 01:11:22.047 "results": [ 01:11:22.047 { 01:11:22.047 "job": "NVMe0n1", 01:11:22.047 "core_mask": "0x1", 01:11:22.047 "workload": "verify", 01:11:22.047 "status": "finished", 01:11:22.047 "verify_range": { 01:11:22.047 "start": 0, 01:11:22.047 "length": 16384 01:11:22.047 }, 01:11:22.047 "queue_depth": 1024, 01:11:22.047 "io_size": 4096, 01:11:22.047 "runtime": 10.082744, 01:11:22.047 "iops": 9308.577109564618, 01:11:22.047 "mibps": 36.36162933423679, 01:11:22.047 "io_failed": 0, 01:11:22.047 "io_timeout": 0, 01:11:22.047 "avg_latency_us": 109470.07014102842, 01:11:22.047 "min_latency_us": 25499.46181818182, 01:11:22.047 "max_latency_us": 78166.57454545454 01:11:22.047 } 01:11:22.047 ], 
01:11:22.047 "core_count": 1 01:11:22.047 } 01:11:22.047 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 103007 01:11:22.047 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 103007 ']' 01:11:22.048 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 103007 01:11:22.048 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 01:11:22.048 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:11:22.048 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103007 01:11:22.048 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:11:22.048 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:11:22.048 killing process with pid 103007 01:11:22.048 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103007' 01:11:22.048 Received shutdown signal, test time was about 10.000000 seconds 01:11:22.048 01:11:22.048 Latency(us) 01:11:22.048 [2024-12-09T06:10:16.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:22.048 [2024-12-09T06:10:16.634Z] =================================================================================================================== 01:11:22.048 [2024-12-09T06:10:16.634Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:11:22.048 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 103007 01:11:22.048 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 103007 01:11:22.305 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 01:11:22.305 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 01:11:22.305 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 01:11:22.305 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 01:11:22.305 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:11:22.305 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 01:11:22.305 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 01:11:22.305 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:11:22.305 rmmod nvme_tcp 01:11:22.305 rmmod nvme_fabrics 01:11:22.305 rmmod nvme_keyring 01:11:22.305 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:11:22.305 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 01:11:22.305 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 01:11:22.305 06:10:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 102969 ']' 01:11:22.305 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 102969 01:11:22.305 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 102969 ']' 01:11:22.305 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 102969 01:11:22.305 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 01:11:22.305 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:11:22.305 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102969 01:11:22.305 killing process with pid 102969 01:11:22.305 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:11:22.305 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:11:22.305 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102969' 01:11:22.305 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 102969 01:11:22.305 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 102969 01:11:22.562 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:11:22.562 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:11:22.562 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:11:22.562 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 01:11:22.562 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 01:11:22.562 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:11:22.563 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 01:11:22.563 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:11:22.563 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:11:22.563 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:11:22.563 06:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:11:22.563 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:11:22.563 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:11:22.563 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:11:22.563 06:10:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:11:22.563 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:11:22.563 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:11:22.563 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:11:22.563 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:11:22.563 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:11:22.821 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:11:22.821 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:11:22.821 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 01:11:22.821 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:11:22.821 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:11:22.821 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:11:22.821 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 01:11:22.821 01:11:22.821 real 0m12.469s 01:11:22.821 user 0m20.793s 01:11:22.821 sys 0m2.167s 01:11:22.821 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 01:11:22.821 ************************************ 01:11:22.821 END TEST nvmf_queue_depth 01:11:22.821 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 01:11:22.821 ************************************ 01:11:22.821 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 01:11:22.821 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:11:22.821 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:11:22.821 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:11:22.821 ************************************ 01:11:22.821 START TEST nvmf_target_multipath 01:11:22.821 ************************************ 01:11:22.821 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 01:11:22.821 * Looking for test storage... 
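The nvmf_target_multipath run that starts here leans on a small polling helper, check_ana_state, whose xtrace expansion shows up repeatedly further down in this trace. A condensed sketch of that helper, reconstructed from those expansions (the real test/nvmf/target/multipath.sh may differ in details):

check_ana_state() {
    # Wait until /sys/block/<path>/ana_state reports the expected ANA state,
    # polling once per second for up to ~20 seconds (values as seen in the trace).
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    while [[ ! -e $ana_state_f ]] || [[ $(< "$ana_state_f") != "$ana_state" ]]; do
        (( timeout-- == 0 )) && return 1
        sleep 1s
    done
}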
01:11:22.821 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:11:22.821 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:11:22.821 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 01:11:22.821 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:11:23.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:11:23.081 --rc genhtml_branch_coverage=1 01:11:23.081 --rc genhtml_function_coverage=1 01:11:23.081 --rc genhtml_legend=1 01:11:23.081 --rc geninfo_all_blocks=1 01:11:23.081 --rc geninfo_unexecuted_blocks=1 01:11:23.081 01:11:23.081 ' 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:11:23.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:11:23.081 --rc genhtml_branch_coverage=1 01:11:23.081 --rc genhtml_function_coverage=1 01:11:23.081 --rc genhtml_legend=1 01:11:23.081 --rc geninfo_all_blocks=1 01:11:23.081 --rc geninfo_unexecuted_blocks=1 01:11:23.081 01:11:23.081 ' 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:11:23.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:11:23.081 --rc genhtml_branch_coverage=1 01:11:23.081 --rc genhtml_function_coverage=1 01:11:23.081 --rc genhtml_legend=1 01:11:23.081 --rc geninfo_all_blocks=1 01:11:23.081 --rc geninfo_unexecuted_blocks=1 01:11:23.081 01:11:23.081 ' 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:11:23.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:11:23.081 --rc genhtml_branch_coverage=1 01:11:23.081 --rc genhtml_function_coverage=1 01:11:23.081 --rc 
genhtml_legend=1 01:11:23.081 --rc geninfo_all_blocks=1 01:11:23.081 --rc geninfo_unexecuted_blocks=1 01:11:23.081 01:11:23.081 ' 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:23.081 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:11:23.082 06:10:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:11:23.082 06:10:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:11:23.082 Cannot find device "nvmf_init_br" 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:11:23.082 Cannot find device "nvmf_init_br2" 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:11:23.082 Cannot find device "nvmf_tgt_br" 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:11:23.082 Cannot find device "nvmf_tgt_br2" 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set 
nvmf_init_br down 01:11:23.082 Cannot find device "nvmf_init_br" 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:11:23.082 Cannot find device "nvmf_init_br2" 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:11:23.082 Cannot find device "nvmf_tgt_br" 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:11:23.082 Cannot find device "nvmf_tgt_br2" 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:11:23.082 Cannot find device "nvmf_br" 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:11:23.082 Cannot find device "nvmf_init_if" 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # true 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:11:23.082 Cannot find device "nvmf_init_if2" 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:11:23.082 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:11:23.082 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:11:23.082 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:11:23.341 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:11:23.341 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:11:23.341 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add 
nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:11:23.341 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:11:23.341 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:11:23.341 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:11:23.341 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:11:23.341 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:11:23.341 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:11:23.341 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:11:23.341 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:11:23.341 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:11:23.341 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:11:23.341 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:11:23.341 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:11:23.341 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:11:23.341 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:11:23.341 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:11:23.341 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:11:23.341 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:11:23.341 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:11:23.341 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:11:23.342 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:11:23.342 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 01:11:23.342 01:11:23.342 --- 10.0.0.3 ping statistics --- 01:11:23.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:11:23.342 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:11:23.342 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:11:23.342 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 01:11:23.342 01:11:23.342 --- 10.0.0.4 ping statistics --- 01:11:23.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:11:23.342 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:11:23.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:11:23.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 01:11:23.342 01:11:23.342 --- 10.0.0.1 ping statistics --- 01:11:23.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:11:23.342 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:11:23.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:11:23.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 01:11:23.342 01:11:23.342 --- 10.0.0.2 ping statistics --- 01:11:23.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:11:23.342 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=103369 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 103369 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 103369 ']' 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 01:11:23.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
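For orientation, the veth topology that nvmf_veth_init assembled above reduces to the sketch below. It paraphrases the ip/iptables calls already visible in the trace and is not the full helper; interface names and 10.0.0.x addresses are the ones the log shows, and the link-up steps are omitted for brevity.

# Two initiator-side veth interfaces and two target-side veth interfaces
# (the latter moved into the nvmf_tgt_ns_spdk namespace), all joined by the
# nvmf_br bridge, giving two independent paths to the target.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator, path 1
ip addr add 10.0.0.2/24 dev nvmf_init_if2                                  # initiator, path 2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target, path 1
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2   # target, path 2
ip link add nvmf_br type bridge
ip link set nvmf_init_br  master nvmf_br
ip link set nvmf_init_br2 master nvmf_br
ip link set nvmf_tgt_br   master nvmf_br
ip link set nvmf_tgt_br2  master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT

The four pings above (10.0.0.3/10.0.0.4 from the host, 10.0.0.1/10.0.0.2 from inside the namespace) confirm both bridged paths before the NVMe-oF target is started.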
01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 01:11:23.342 06:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:11:23.601 [2024-12-09 06:10:17.974152] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:11:23.601 [2024-12-09 06:10:17.975209] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:11:23.601 [2024-12-09 06:10:17.975284] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:11:23.601 [2024-12-09 06:10:18.123267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:11:23.601 [2024-12-09 06:10:18.165679] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:11:23.601 [2024-12-09 06:10:18.165733] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:11:23.601 [2024-12-09 06:10:18.165747] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:11:23.601 [2024-12-09 06:10:18.165758] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:11:23.601 [2024-12-09 06:10:18.165766] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:11:23.601 [2024-12-09 06:10:18.166705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:11:23.601 [2024-12-09 06:10:18.166798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:11:23.601 [2024-12-09 06:10:18.166875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:11:23.601 [2024-12-09 06:10:18.166876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:11:23.860 [2024-12-09 06:10:18.229375] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:11:23.860 [2024-12-09 06:10:18.229580] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:11:23.860 [2024-12-09 06:10:18.230457] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 01:11:23.860 [2024-12-09 06:10:18.230480] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 01:11:23.860 [2024-12-09 06:10:18.230556] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
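With nvmf_tgt now running in interrupt mode inside the namespace, the trace below configures it over rpc.py and connects the initiator once per path. Condensed into a sketch using the same commands the trace records (rpc.py path shortened; host NQN/ID come from NVME_HOST as sourced earlier; flag semantics are not expanded beyond what the log itself states):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py bdev_malloc_create 64 512 -b Malloc0                 # 64 MB malloc bdev, 512-byte blocks
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420
# One connect per listener gives the two ANA paths (nvme0c0n1 / nvme0c1n1)
# whose states the test toggles while fio runs.
nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G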
01:11:24.430 06:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:11:24.430 06:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 01:11:24.430 06:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:11:24.430 06:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 01:11:24.430 06:10:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:11:24.689 06:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:11:24.689 06:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:11:24.949 [2024-12-09 06:10:19.320202] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:11:24.949 06:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 01:11:25.208 Malloc0 01:11:25.208 06:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 01:11:25.468 06:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:11:25.727 06:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:11:25.986 [2024-12-09 06:10:20.516205] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:11:25.986 06:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 01:11:26.245 [2024-12-09 06:10:20.780134] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 01:11:26.245 06:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 01:11:26.505 06:10:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 01:11:26.505 06:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 01:11:26.505 06:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 01:11:26.505 06:10:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:11:26.505 06:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 01:11:26.505 06:10:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=103507 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 01:11:29.043 06:10:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 01:11:29.043 [global] 01:11:29.043 thread=1 01:11:29.043 invalidate=1 01:11:29.043 rw=randrw 01:11:29.043 time_based=1 01:11:29.043 runtime=6 01:11:29.043 ioengine=libaio 01:11:29.043 direct=1 01:11:29.043 bs=4096 01:11:29.043 iodepth=128 01:11:29.043 norandommap=0 01:11:29.043 numjobs=1 01:11:29.043 01:11:29.043 verify_dump=1 01:11:29.043 verify_backlog=512 01:11:29.043 verify_state_save=0 01:11:29.043 do_verify=1 01:11:29.043 verify=crc32c-intel 01:11:29.043 [job0] 01:11:29.043 filename=/dev/nvme0n1 01:11:29.043 Could not set queue depth (nvme0n1) 01:11:29.043 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:11:29.043 fio-3.35 01:11:29.043 Starting 1 thread 01:11:29.661 06:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 01:11:29.920 06:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 01:11:30.487 06:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 01:11:30.487 06:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 01:11:30.487 06:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:11:30.487 06:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:11:30.487 06:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:11:30.487 06:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:11:30.487 06:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 01:11:30.487 06:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 01:11:30.487 06:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:11:30.487 06:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:11:30.487 06:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:11:30.487 06:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:11:30.487 06:10:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 01:11:31.420 06:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 01:11:31.420 06:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 01:11:31.420 06:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:11:31.420 06:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:11:31.679 06:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 01:11:31.937 06:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 01:11:31.937 06:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 01:11:31.937 06:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:11:31.937 06:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:11:31.937 06:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:11:31.937 06:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:11:31.937 06:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 01:11:31.937 06:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 01:11:31.937 06:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:11:31.937 06:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:11:31.937 06:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:11:31.937 06:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:11:31.937 06:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 01:11:32.908 06:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 01:11:32.908 06:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 01:11:32.908 06:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:11:32.908 06:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 103507 01:11:34.810 01:11:34.810 job0: (groupid=0, jobs=1): err= 0: pid=103529: Mon Dec 9 06:10:29 2024 01:11:34.810 read: IOPS=10.8k, BW=42.1MiB/s (44.2MB/s)(253MiB/6007msec) 01:11:34.810 slat (usec): min=2, max=6770, avg=53.67, stdev=267.41 01:11:34.810 clat (usec): min=335, max=15840, avg=7946.71, stdev=1351.08 01:11:34.810 lat (usec): min=354, max=15854, avg=8000.38, stdev=1364.69 01:11:34.810 clat percentiles (usec): 01:11:34.810 | 1.00th=[ 4621], 5.00th=[ 5735], 10.00th=[ 6521], 20.00th=[ 7046], 01:11:34.810 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7832], 60.00th=[ 8094], 01:11:34.810 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 9503], 95.00th=[10421], 01:11:34.810 | 99.00th=[11994], 99.50th=[12780], 99.90th=[13698], 99.95th=[13829], 01:11:34.810 | 99.99th=[14484] 01:11:34.810 bw ( KiB/s): min= 7744, max=27776, per=53.07%, avg=22887.27, stdev=6202.38, samples=11 01:11:34.810 iops : min= 1936, max= 6944, avg=5721.82, stdev=1550.60, samples=11 01:11:34.810 write: IOPS=6264, BW=24.5MiB/s (25.7MB/s)(134MiB/5470msec); 0 zone resets 01:11:34.810 slat (usec): min=4, max=7382, avg=64.36, stdev=169.51 01:11:34.810 clat (usec): min=313, max=14468, avg=7201.03, stdev=1057.57 01:11:34.810 lat (usec): min=345, max=14495, avg=7265.38, stdev=1061.57 01:11:34.810 clat percentiles (usec): 01:11:34.810 | 1.00th=[ 3687], 5.00th=[ 5407], 10.00th=[ 6194], 20.00th=[ 6652], 01:11:34.810 | 30.00th=[ 6915], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7439], 01:11:34.810 | 70.00th=[ 7635], 80.00th=[ 7767], 90.00th=[ 8094], 95.00th=[ 8356], 01:11:34.810 | 99.00th=[10814], 99.50th=[11731], 99.90th=[13566], 99.95th=[13698], 01:11:34.810 | 99.99th=[14484] 01:11:34.810 bw ( KiB/s): min= 8192, max=26968, per=91.33%, avg=22885.82, stdev=6029.29, samples=11 01:11:34.810 iops : min= 2048, max= 6742, avg=5721.45, stdev=1507.32, samples=11 01:11:34.810 lat (usec) : 500=0.01%, 750=0.01% 01:11:34.810 lat (msec) : 2=0.01%, 4=0.74%, 10=94.06%, 20=5.18% 01:11:34.810 cpu : usr=4.80%, sys=21.56%, ctx=7231, majf=0, minf=90 01:11:34.810 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 01:11:34.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:34.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:11:34.810 issued rwts: total=64761,34267,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:34.810 latency : target=0, window=0, percentile=100.00%, depth=128 01:11:34.810 01:11:34.810 Run status group 0 (all jobs): 01:11:34.810 READ: bw=42.1MiB/s (44.2MB/s), 42.1MiB/s-42.1MiB/s (44.2MB/s-44.2MB/s), io=253MiB (265MB), run=6007-6007msec 01:11:34.810 WRITE: bw=24.5MiB/s (25.7MB/s), 24.5MiB/s-24.5MiB/s (25.7MB/s-25.7MB/s), io=134MiB (140MB), run=5470-5470msec 01:11:34.810 01:11:34.810 Disk stats (read/write): 01:11:34.810 nvme0n1: ios=63819/33660, merge=0/0, ticks=476016/230622, in_queue=706638, util=98.56% 01:11:34.810 06:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 01:11:35.377 06:10:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 01:11:35.637 06:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 01:11:35.637 06:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 01:11:35.637 06:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:11:35.637 06:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:11:35.637 06:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 01:11:35.637 06:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 01:11:35.637 06:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 01:11:35.637 06:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 01:11:35.637 06:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:11:35.637 06:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:11:35.637 06:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:11:35.637 06:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 01:11:35.637 06:10:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 01:11:36.573 06:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 01:11:36.573 06:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 01:11:36.573 06:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 01:11:36.573 06:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 01:11:36.573 06:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=103653 01:11:36.573 06:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 01:11:36.573 06:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 01:11:36.573 [global] 01:11:36.573 thread=1 01:11:36.573 invalidate=1 01:11:36.573 rw=randrw 01:11:36.573 time_based=1 01:11:36.573 runtime=6 01:11:36.573 ioengine=libaio 01:11:36.573 direct=1 01:11:36.573 bs=4096 01:11:36.573 iodepth=128 01:11:36.573 norandommap=0 01:11:36.573 numjobs=1 01:11:36.573 01:11:36.573 verify_dump=1 01:11:36.573 verify_backlog=512 01:11:36.573 verify_state_save=0 01:11:36.573 do_verify=1 01:11:36.573 verify=crc32c-intel 01:11:36.573 [job0] 01:11:36.573 filename=/dev/nvme0n1 01:11:36.573 Could not set queue depth (nvme0n1) 01:11:36.573 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:11:36.573 fio-3.35 01:11:36.573 Starting 1 thread 01:11:37.509 06:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 01:11:37.768 06:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 01:11:38.027 06:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 01:11:38.027 06:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 01:11:38.027 06:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:11:38.027 06:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:11:38.027 06:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 01:11:38.027 06:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:11:38.027 06:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 01:11:38.027 06:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 01:11:38.027 06:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:11:38.027 06:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:11:38.027 06:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:11:38.027 06:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:11:38.027 06:10:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 01:11:39.403 06:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 01:11:39.403 06:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:11:39.403 06:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:11:39.403 06:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 01:11:39.403 06:10:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 01:11:39.678 06:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 01:11:39.678 06:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 01:11:39.678 06:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:11:39.678 06:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 01:11:39.678 06:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 01:11:39.678 06:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 01:11:39.678 06:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 01:11:39.678 06:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 01:11:39.678 06:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 01:11:39.678 06:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 01:11:39.678 06:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:11:39.678 06:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:11:39.678 06:10:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 01:11:40.649 06:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 01:11:40.649 06:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 01:11:40.649 06:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 01:11:40.649 06:10:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 103653 01:11:43.178 01:11:43.178 job0: (groupid=0, jobs=1): err= 0: pid=103680: Mon Dec 9 06:10:37 2024 01:11:43.178 read: IOPS=11.8k, BW=46.0MiB/s (48.2MB/s)(276MiB/6007msec) 01:11:43.178 slat (usec): min=5, max=6339, avg=42.14, stdev=227.64 01:11:43.178 clat (usec): min=303, max=14948, avg=7254.23, stdev=1681.05 01:11:43.178 lat (usec): min=339, max=14983, avg=7296.37, stdev=1702.04 01:11:43.178 clat percentiles (usec): 01:11:43.178 | 1.00th=[ 3163], 5.00th=[ 4228], 10.00th=[ 4883], 20.00th=[ 5800], 01:11:43.178 | 30.00th=[ 6652], 40.00th=[ 7177], 50.00th=[ 7504], 60.00th=[ 7767], 01:11:43.178 | 70.00th=[ 8029], 80.00th=[ 8455], 90.00th=[ 8979], 95.00th=[ 9896], 01:11:43.178 | 99.00th=[11600], 99.50th=[12256], 99.90th=[13304], 99.95th=[13566], 01:11:43.178 | 99.99th=[14615] 01:11:43.178 bw ( KiB/s): min= 5736, max=39512, per=55.07%, avg=25938.83, stdev=9958.97, samples=12 01:11:43.178 iops : min= 1434, max= 9878, avg=6484.67, stdev=2489.81, samples=12 01:11:43.178 write: IOPS=7446, BW=29.1MiB/s (30.5MB/s)(152MiB/5236msec); 0 zone resets 01:11:43.178 slat (usec): min=14, max=3200, avg=53.35, stdev=130.70 01:11:43.178 clat (usec): min=156, max=13432, avg=6391.88, stdev=1625.44 01:11:43.178 lat (usec): min=206, max=13448, avg=6445.23, stdev=1639.79 01:11:43.178 clat percentiles (usec): 01:11:43.178 | 1.00th=[ 2769], 5.00th=[ 3490], 10.00th=[ 3916], 20.00th=[ 4686], 01:11:43.178 | 30.00th=[ 5473], 40.00th=[ 6521], 50.00th=[ 6980], 60.00th=[ 7242], 01:11:43.178 | 70.00th=[ 7439], 80.00th=[ 7701], 90.00th=[ 7963], 95.00th=[ 8225], 01:11:43.178 | 99.00th=[10028], 99.50th=[11076], 99.90th=[12387], 99.95th=[12780], 01:11:43.178 | 99.99th=[13304] 01:11:43.178 bw ( KiB/s): min= 6016, 
max=38664, per=87.13%, avg=25954.83, stdev=9791.22, samples=12 01:11:43.178 iops : min= 1504, max= 9666, avg=6488.67, stdev=2447.88, samples=12 01:11:43.178 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 01:11:43.178 lat (msec) : 2=0.08%, 4=6.14%, 10=90.44%, 20=3.32% 01:11:43.178 cpu : usr=5.46%, sys=24.06%, ctx=9210, majf=0, minf=129 01:11:43.178 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 01:11:43.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:11:43.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:11:43.178 issued rwts: total=70732,38990,0,0 short=0,0,0,0 dropped=0,0,0,0 01:11:43.178 latency : target=0, window=0, percentile=100.00%, depth=128 01:11:43.178 01:11:43.178 Run status group 0 (all jobs): 01:11:43.178 READ: bw=46.0MiB/s (48.2MB/s), 46.0MiB/s-46.0MiB/s (48.2MB/s-48.2MB/s), io=276MiB (290MB), run=6007-6007msec 01:11:43.178 WRITE: bw=29.1MiB/s (30.5MB/s), 29.1MiB/s-29.1MiB/s (30.5MB/s-30.5MB/s), io=152MiB (160MB), run=5236-5236msec 01:11:43.178 01:11:43.178 Disk stats (read/write): 01:11:43.178 nvme0n1: ios=70146/38119, merge=0/0, ticks=474831/229003, in_queue=703834, util=98.65% 01:11:43.178 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:11:43.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 01:11:43.178 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:11:43.178 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 01:11:43.178 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:11:43.178 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:11:43.178 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:11:43.178 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:11:43.178 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 01:11:43.178 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:11:43.437 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 01:11:43.437 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 01:11:43.437 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 01:11:43.437 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 01:11:43.437 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 01:11:43.437 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 01:11:43.437 06:10:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:11:43.437 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 01:11:43.438 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 01:11:43.438 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:11:43.438 rmmod nvme_tcp 01:11:43.438 rmmod nvme_fabrics 01:11:43.438 rmmod nvme_keyring 01:11:43.438 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:11:43.438 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 01:11:43.438 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 01:11:43.438 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 103369 ']' 01:11:43.438 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 103369 01:11:43.438 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 103369 ']' 01:11:43.438 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 103369 01:11:43.438 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 01:11:43.438 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:11:43.438 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103369 01:11:43.438 killing process with pid 103369 01:11:43.438 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:11:43.438 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:11:43.438 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103369' 01:11:43.438 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 103369 01:11:43.438 06:10:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 103369 01:11:43.696 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:11:43.697 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:11:43.697 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:11:43.697 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 01:11:43.697 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 01:11:43.697 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:11:43.697 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@791 -- # iptables-restore 01:11:43.697 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:11:43.697 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:11:43.697 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:11:43.697 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:11:43.697 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:11:43.697 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:11:43.697 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:11:43.697 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:11:43.697 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:11:43.697 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:11:43.697 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:11:43.955 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:11:43.955 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:11:43.955 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:11:43.955 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:11:43.955 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 01:11:43.955 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:11:43.955 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:11:43.955 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:11:43.955 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 01:11:43.955 ************************************ 01:11:43.955 END TEST nvmf_target_multipath 01:11:43.955 ************************************ 01:11:43.955 01:11:43.955 real 0m21.128s 01:11:43.955 user 1m10.462s 01:11:43.955 sys 0m10.094s 01:11:43.955 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 01:11:43.955 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 01:11:43.955 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 01:11:43.955 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:11:43.955 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:11:43.955 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:11:43.955 ************************************ 01:11:43.955 START TEST nvmf_zcopy 01:11:43.955 ************************************ 01:11:43.955 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 01:11:44.215 * Looking for test storage... 01:11:44.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:11:44.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:11:44.215 --rc genhtml_branch_coverage=1 01:11:44.215 --rc genhtml_function_coverage=1 01:11:44.215 --rc genhtml_legend=1 01:11:44.215 --rc geninfo_all_blocks=1 01:11:44.215 --rc geninfo_unexecuted_blocks=1 01:11:44.215 01:11:44.215 ' 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:11:44.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:11:44.215 --rc genhtml_branch_coverage=1 01:11:44.215 --rc genhtml_function_coverage=1 01:11:44.215 --rc genhtml_legend=1 01:11:44.215 --rc geninfo_all_blocks=1 01:11:44.215 --rc geninfo_unexecuted_blocks=1 01:11:44.215 01:11:44.215 ' 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:11:44.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:11:44.215 --rc genhtml_branch_coverage=1 01:11:44.215 --rc genhtml_function_coverage=1 01:11:44.215 --rc genhtml_legend=1 01:11:44.215 --rc geninfo_all_blocks=1 01:11:44.215 --rc geninfo_unexecuted_blocks=1 01:11:44.215 01:11:44.215 ' 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:11:44.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:11:44.215 --rc genhtml_branch_coverage=1 01:11:44.215 --rc genhtml_function_coverage=1 01:11:44.215 --rc genhtml_legend=1 01:11:44.215 --rc geninfo_all_blocks=1 01:11:44.215 --rc geninfo_unexecuted_blocks=1 01:11:44.215 01:11:44.215 ' 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:11:44.215 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:11:44.216 06:10:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:11:44.216 06:10:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:11:44.216 Cannot find device "nvmf_init_br" 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # true 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:11:44.216 Cannot find device "nvmf_init_br2" 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # true 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:11:44.216 Cannot find device "nvmf_tgt_br" 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # true 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:11:44.216 Cannot find device "nvmf_tgt_br2" 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # true 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:11:44.216 Cannot find device "nvmf_init_br" 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # true 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:11:44.216 Cannot find device "nvmf_init_br2" 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # true 01:11:44.216 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:11:44.474 Cannot find device "nvmf_tgt_br" 01:11:44.474 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # true 01:11:44.474 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:11:44.475 Cannot find device "nvmf_tgt_br2" 01:11:44.475 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # true 01:11:44.475 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:11:44.475 Cannot find device 
"nvmf_br" 01:11:44.475 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # true 01:11:44.475 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:11:44.475 Cannot find device "nvmf_init_if" 01:11:44.475 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # true 01:11:44.475 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:11:44.475 Cannot find device "nvmf_init_if2" 01:11:44.475 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # true 01:11:44.475 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:11:44.475 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:11:44.475 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # true 01:11:44.475 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:11:44.475 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:11:44.475 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # true 01:11:44.475 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:11:44.475 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:11:44.475 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:11:44.475 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:11:44.475 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:11:44.475 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:11:44.475 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:11:44.475 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:11:44.475 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:11:44.475 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:11:44.475 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:11:44.475 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:11:44.475 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:11:44.475 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:11:44.475 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:11:44.475 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:11:44.475 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:11:44.475 06:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:11:44.475 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:11:44.475 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:11:44.475 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:11:44.475 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:11:44.475 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:11:44.475 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:11:44.475 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:11:44.734 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:11:44.734 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:11:44.734 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:11:44.734 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:11:44.734 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:11:44.734 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:11:44.734 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:11:44.734 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:11:44.734 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:11:44.734 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 01:11:44.734 01:11:44.734 --- 10.0.0.3 ping statistics --- 01:11:44.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:11:44.734 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 01:11:44.734 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:11:44.734 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
01:11:44.734 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 01:11:44.734 01:11:44.734 --- 10.0.0.4 ping statistics --- 01:11:44.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:11:44.734 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 01:11:44.734 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:11:44.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:11:44.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 01:11:44.734 01:11:44.734 --- 10.0.0.1 ping statistics --- 01:11:44.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:11:44.734 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 01:11:44.734 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:11:44.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:11:44.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 01:11:44.734 01:11:44.734 --- 10.0.0.2 ping statistics --- 01:11:44.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:11:44.734 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 01:11:44.734 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:11:44.734 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 01:11:44.735 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:11:44.735 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:11:44.735 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:11:44.735 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:11:44.735 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:11:44.735 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:11:44.735 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:11:44.735 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 01:11:44.735 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:11:44.735 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 01:11:44.735 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:11:44.735 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=104006 01:11:44.735 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 01:11:44.735 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 104006 01:11:44.735 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 104006 ']' 01:11:44.735 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 01:11:44.735 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 01:11:44.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:11:44.735 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:11:44.735 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 01:11:44.735 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:11:44.735 [2024-12-09 06:10:39.206213] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:11:44.735 [2024-12-09 06:10:39.207550] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:11:44.735 [2024-12-09 06:10:39.207657] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:11:44.993 [2024-12-09 06:10:39.359111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:44.993 [2024-12-09 06:10:39.398964] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:11:44.993 [2024-12-09 06:10:39.399038] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:11:44.993 [2024-12-09 06:10:39.399064] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:11:44.993 [2024-12-09 06:10:39.399074] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:11:44.993 [2024-12-09 06:10:39.399083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:11:44.993 [2024-12-09 06:10:39.399432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:11:44.993 [2024-12-09 06:10:39.457389] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:11:44.993 [2024-12-09 06:10:39.457808] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
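(Editor's note, not part of the captured log: the entries above launch the SPDK target inside the nvmf_tgt_ns_spdk namespace in interrupt mode and wait for its RPC socket, and the entries that follow configure the zero-copy TCP target over RPC. A minimal standalone sketch of that same sequence, assuming the default /var/tmp/spdk.sock RPC socket and the repository paths used in this run, is:

    # Start the target in the test namespace with interrupt mode enabled (as nvmfappstart does above).
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    # Once the RPC socket is up: create a zero-copy TCP transport, a subsystem, a listener, and a malloc-backed namespace,
    # mirroring the rpc_cmd calls logged below.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
)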
01:11:44.993 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:11:44.993 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 01:11:44.993 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:11:44.993 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 01:11:44.993 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:11:44.993 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:11:44.993 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 01:11:44.993 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 01:11:44.993 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:44.993 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:11:44.993 [2024-12-09 06:10:39.548156] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:11:44.993 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:44.993 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 01:11:44.993 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:44.993 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:11:44.993 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:44.993 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:11:44.994 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:44.994 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:11:44.994 [2024-12-09 06:10:39.564488] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:11:44.994 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:44.994 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 01:11:44.994 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:44.994 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:11:44.994 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:44.994 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 01:11:44.994 06:10:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:44.994 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:11:45.252 malloc0 01:11:45.252 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:45.252 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 01:11:45.252 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:11:45.252 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:11:45.252 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:11:45.252 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 01:11:45.252 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 01:11:45.252 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 01:11:45.252 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 01:11:45.252 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:11:45.252 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:11:45.252 { 01:11:45.252 "params": { 01:11:45.252 "name": "Nvme$subsystem", 01:11:45.252 "trtype": "$TEST_TRANSPORT", 01:11:45.252 "traddr": "$NVMF_FIRST_TARGET_IP", 01:11:45.252 "adrfam": "ipv4", 01:11:45.252 "trsvcid": "$NVMF_PORT", 01:11:45.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:11:45.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:11:45.252 "hdgst": ${hdgst:-false}, 01:11:45.252 "ddgst": ${ddgst:-false} 01:11:45.252 }, 01:11:45.252 "method": "bdev_nvme_attach_controller" 01:11:45.252 } 01:11:45.252 EOF 01:11:45.252 )") 01:11:45.252 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 01:11:45.252 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 01:11:45.252 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 01:11:45.252 06:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:11:45.252 "params": { 01:11:45.252 "name": "Nvme1", 01:11:45.252 "trtype": "tcp", 01:11:45.252 "traddr": "10.0.0.3", 01:11:45.252 "adrfam": "ipv4", 01:11:45.252 "trsvcid": "4420", 01:11:45.252 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:11:45.252 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:11:45.252 "hdgst": false, 01:11:45.252 "ddgst": false 01:11:45.252 }, 01:11:45.252 "method": "bdev_nvme_attach_controller" 01:11:45.252 }' 01:11:45.252 [2024-12-09 06:10:39.655735] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
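Everything on the target side so far has been configured through rpc_cmd: a TCP transport with zero-copy enabled, subsystem nqn.2016-06.io.spdk:cnode1, data and discovery listeners on 10.0.0.3:4420, a 32 MB malloc bdev, and namespace 1 backed by it; gen_nvmf_target_json then renders the bdev_nvme_attach_controller parameters that bdevperf reads from --json /dev/fd/62. For reference, the same setup issued directly against scripts/rpc.py would look roughly like the sketch below (flags copied from the xtrace above; the relative scripts/rpc.py path is an assumption about the working directory):

# Sketch of the equivalent one-off setup via scripts/rpc.py (mirrors the rpc_cmd calls above)
scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy                      # TCP transport, zero-copy enabled
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0                             # 32 MB bdev, 4096-byte blocks
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

Feeding the generated JSON through /dev/fd/62 keeps the bdevperf configuration entirely in-process; the printf output above is exactly the document bdevperf parses.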
01:11:45.252 [2024-12-09 06:10:39.656259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104044 ] 01:11:45.252 [2024-12-09 06:10:39.807439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:45.510 [2024-12-09 06:10:39.848846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:11:45.510 Running I/O for 10 seconds... 01:11:47.819 5185.00 IOPS, 40.51 MiB/s [2024-12-09T06:10:43.341Z] 5206.50 IOPS, 40.68 MiB/s [2024-12-09T06:10:44.277Z] 5257.33 IOPS, 41.07 MiB/s [2024-12-09T06:10:45.223Z] 5289.25 IOPS, 41.32 MiB/s [2024-12-09T06:10:46.172Z] 5322.40 IOPS, 41.58 MiB/s [2024-12-09T06:10:47.107Z] 5402.83 IOPS, 42.21 MiB/s [2024-12-09T06:10:48.041Z] 5424.29 IOPS, 42.38 MiB/s [2024-12-09T06:10:49.417Z] 5439.88 IOPS, 42.50 MiB/s [2024-12-09T06:10:50.351Z] 5451.67 IOPS, 42.59 MiB/s [2024-12-09T06:10:50.351Z] 5453.70 IOPS, 42.61 MiB/s 01:11:55.765 Latency(us) 01:11:55.765 [2024-12-09T06:10:50.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:11:55.765 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 01:11:55.765 Verification LBA range: start 0x0 length 0x1000 01:11:55.765 Nvme1n1 : 10.02 5456.34 42.63 0.00 0.00 23383.59 2829.96 34078.72 01:11:55.765 [2024-12-09T06:10:50.351Z] =================================================================================================================== 01:11:55.765 [2024-12-09T06:10:50.351Z] Total : 5456.34 42.63 0.00 0.00 23383.59 2829.96 34078.72 01:11:55.765 06:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=104151 01:11:55.765 06:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 01:11:55.765 06:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 01:11:55.765 06:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:11:55.765 06:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 01:11:55.765 06:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 01:11:55.765 06:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 01:11:55.765 06:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:11:55.765 06:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:11:55.765 { 01:11:55.765 "params": { 01:11:55.765 "name": "Nvme$subsystem", 01:11:55.765 "trtype": "$TEST_TRANSPORT", 01:11:55.765 "traddr": "$NVMF_FIRST_TARGET_IP", 01:11:55.765 "adrfam": "ipv4", 01:11:55.765 "trsvcid": "$NVMF_PORT", 01:11:55.765 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:11:55.765 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:11:55.765 "hdgst": ${hdgst:-false}, 01:11:55.765 "ddgst": ${ddgst:-false} 01:11:55.765 }, 01:11:55.765 "method": "bdev_nvme_attach_controller" 01:11:55.765 } 01:11:55.765 EOF 01:11:55.765 )") 01:11:55.765 06:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 01:11:55.765 [2024-12-09 
06:10:50.167981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:55.765 [2024-12-09 06:10:50.168042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:55.765 06:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 01:11:55.765 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:55.765 06:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 01:11:55.765 06:10:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:11:55.765 "params": { 01:11:55.765 "name": "Nvme1", 01:11:55.765 "trtype": "tcp", 01:11:55.765 "traddr": "10.0.0.3", 01:11:55.765 "adrfam": "ipv4", 01:11:55.765 "trsvcid": "4420", 01:11:55.765 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:11:55.765 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:11:55.765 "hdgst": false, 01:11:55.765 "ddgst": false 01:11:55.765 }, 01:11:55.765 "method": "bdev_nvme_attach_controller" 01:11:55.765 }' 01:11:55.765 [2024-12-09 06:10:50.179941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:55.765 [2024-12-09 06:10:50.179972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:55.765 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:55.765 [2024-12-09 06:10:50.191946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:55.765 [2024-12-09 06:10:50.191975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:55.765 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:55.765 [2024-12-09 06:10:50.203944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:55.765 [2024-12-09 06:10:50.203972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:55.765 [2024-12-09 06:10:50.207828] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
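From this point the log interleaves two independent processes: the second bdevperf instance (-t 5 -q 128 -w randrw -M 50 -o 8192, file-prefix spdk_pid104151) going through its startup banner, and a long run of deliberately failing nvmf_subsystem_add_ns calls against the target. Every failure is the same condition: NSID 1 is still attached to nqn.2016-06.io.spdk:cnode1, so each re-add is rejected with JSON-RPC code -32602 (Invalid parameters). A rough reconstruction of the kind of loop that produces this pattern while I/O is in flight, assuming rpc_cmd resolves to scripts/rpc.py as elsewhere in the log (the loop shape and sleep interval are illustrative, not lifted from zcopy.sh):

# Illustrative only: keep poking the add-namespace RPC while bdevperf ($perfpid) is still running.
# Each call is expected to fail with "Requested NSID 1 already in use" and JSON-RPC error -32602.
while kill -0 "$perfpid" 2> /dev/null; do
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    sleep 0.01
done

The target keeps serving I/O throughout; the randrw job's own throughput samples (for example the 11036.00 IOPS entry further down) show up in between the error lines.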
01:11:55.765 [2024-12-09 06:10:50.207913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104151 ] 01:11:55.765 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:55.765 [2024-12-09 06:10:50.215926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:55.765 [2024-12-09 06:10:50.215969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:55.765 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:55.765 [2024-12-09 06:10:50.227970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:55.765 [2024-12-09 06:10:50.228039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:55.765 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:55.765 [2024-12-09 06:10:50.240005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:55.765 [2024-12-09 06:10:50.240068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:55.765 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:55.765 [2024-12-09 06:10:50.251930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:55.765 [2024-12-09 06:10:50.251960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:55.765 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:55.765 [2024-12-09 06:10:50.259923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:55.766 [2024-12-09 06:10:50.259948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:55.766 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:55.766 [2024-12-09 06:10:50.271917] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:55.766 [2024-12-09 06:10:50.271960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:55.766 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:55.766 [2024-12-09 06:10:50.283918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:55.766 [2024-12-09 06:10:50.283959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:55.766 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:55.766 [2024-12-09 06:10:50.295981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:55.766 [2024-12-09 06:10:50.296054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:55.766 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:55.766 [2024-12-09 06:10:50.308009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:55.766 [2024-12-09 06:10:50.308049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:55.766 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:55.766 [2024-12-09 06:10:50.319968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:55.766 [2024-12-09 06:10:50.320019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:55.766 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:55.766 [2024-12-09 06:10:50.331947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:55.766 [2024-12-09 06:10:50.332001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:55.766 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:55.766 [2024-12-09 06:10:50.344002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:55.766 [2024-12-09 
06:10:50.344058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:55.766 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.024 [2024-12-09 06:10:50.351630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:11:56.024 [2024-12-09 06:10:50.355983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.024 [2024-12-09 06:10:50.356035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.024 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.024 [2024-12-09 06:10:50.367970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.024 [2024-12-09 06:10:50.368012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.024 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.024 [2024-12-09 06:10:50.379947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.024 [2024-12-09 06:10:50.379979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.024 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.024 [2024-12-09 06:10:50.385402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:11:56.024 [2024-12-09 06:10:50.391965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.025 [2024-12-09 06:10:50.392014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.025 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.025 [2024-12-09 06:10:50.404003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.025 [2024-12-09 06:10:50.404074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.025 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.025 [2024-12-09 06:10:50.416010] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.025 [2024-12-09 06:10:50.416049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.025 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.025 [2024-12-09 06:10:50.427954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.025 [2024-12-09 06:10:50.428020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.025 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.025 [2024-12-09 06:10:50.439962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.025 [2024-12-09 06:10:50.440031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.025 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.025 [2024-12-09 06:10:50.451955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.025 [2024-12-09 06:10:50.452019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.025 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.025 [2024-12-09 06:10:50.463949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.025 [2024-12-09 06:10:50.463984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.025 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.025 [2024-12-09 06:10:50.475944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.025 [2024-12-09 06:10:50.475991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.025 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.025 [2024-12-09 06:10:50.487946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.025 [2024-12-09 
06:10:50.488009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.025 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.025 [2024-12-09 06:10:50.499963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.025 [2024-12-09 06:10:50.499997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.025 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.025 [2024-12-09 06:10:50.511947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.025 [2024-12-09 06:10:50.511984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.025 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.025 Running I/O for 5 seconds... 01:11:56.025 [2024-12-09 06:10:50.532368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.025 [2024-12-09 06:10:50.532425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.025 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.025 [2024-12-09 06:10:50.552048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.025 [2024-12-09 06:10:50.552085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.025 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.025 [2024-12-09 06:10:50.562978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.025 [2024-12-09 06:10:50.563028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.025 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.025 [2024-12-09 06:10:50.581654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.025 [2024-12-09 06:10:50.581702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 01:11:56.025 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.025 [2024-12-09 06:10:50.598650] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.025 [2024-12-09 06:10:50.598707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.025 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.283 [2024-12-09 06:10:50.609263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.283 [2024-12-09 06:10:50.609314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.283 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.283 [2024-12-09 06:10:50.625889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.283 [2024-12-09 06:10:50.625939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.283 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.283 [2024-12-09 06:10:50.641960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.283 [2024-12-09 06:10:50.642023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.283 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.283 [2024-12-09 06:10:50.659974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.283 [2024-12-09 06:10:50.660038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.284 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.284 [2024-12-09 06:10:50.670165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.284 [2024-12-09 06:10:50.670213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.284 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.284 [2024-12-09 06:10:50.685961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.284 [2024-12-09 06:10:50.686011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.284 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.284 [2024-12-09 06:10:50.702622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.284 [2024-12-09 06:10:50.702684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.284 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.284 [2024-12-09 06:10:50.712909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.284 [2024-12-09 06:10:50.712944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.284 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.284 [2024-12-09 06:10:50.729363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.284 [2024-12-09 06:10:50.729433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.284 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.284 [2024-12-09 06:10:50.747990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.284 [2024-12-09 06:10:50.748041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.284 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.284 [2024-12-09 06:10:50.758946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.284 [2024-12-09 06:10:50.758981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.284 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.284 [2024-12-09 06:10:50.772283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.284 [2024-12-09 06:10:50.772332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.284 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.284 [2024-12-09 06:10:50.792244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.284 [2024-12-09 06:10:50.792300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.284 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.284 [2024-12-09 06:10:50.802878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.284 [2024-12-09 06:10:50.802912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.284 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.284 [2024-12-09 06:10:50.818599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.284 [2024-12-09 06:10:50.818637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.284 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.284 [2024-12-09 06:10:50.829053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.284 [2024-12-09 06:10:50.829116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.284 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.284 [2024-12-09 06:10:50.845536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.284 [2024-12-09 06:10:50.845586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.284 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 01:11:56.284 [2024-12-09 06:10:50.863488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.284 [2024-12-09 06:10:50.863538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.284 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.542 [2024-12-09 06:10:50.884602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.543 [2024-12-09 06:10:50.884677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.543 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.543 [2024-12-09 06:10:50.903557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.543 [2024-12-09 06:10:50.903607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.543 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.543 [2024-12-09 06:10:50.924629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.543 [2024-12-09 06:10:50.924689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.543 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.543 [2024-12-09 06:10:50.942007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.543 [2024-12-09 06:10:50.942049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.543 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.543 [2024-12-09 06:10:50.958461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.543 [2024-12-09 06:10:50.958510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.543 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.543 [2024-12-09 06:10:50.973637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 01:11:56.543 [2024-12-09 06:10:50.973697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.543 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.543 [2024-12-09 06:10:50.990562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.543 [2024-12-09 06:10:50.990610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.543 2024/12/09 06:10:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.543 [2024-12-09 06:10:51.001434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.543 [2024-12-09 06:10:51.001468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.543 2024/12/09 06:10:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.543 [2024-12-09 06:10:51.016587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.543 [2024-12-09 06:10:51.016636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.543 2024/12/09 06:10:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.543 [2024-12-09 06:10:51.035984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.543 [2024-12-09 06:10:51.036033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.543 2024/12/09 06:10:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.543 [2024-12-09 06:10:51.046519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.543 [2024-12-09 06:10:51.046567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.543 2024/12/09 06:10:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.543 [2024-12-09 06:10:51.062134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.543 [2024-12-09 06:10:51.062181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 01:11:56.543 2024/12/09 06:10:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.543 [2024-12-09 06:10:51.077642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.543 [2024-12-09 06:10:51.077708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.543 2024/12/09 06:10:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.543 [2024-12-09 06:10:51.094427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.543 [2024-12-09 06:10:51.094476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.543 2024/12/09 06:10:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.543 [2024-12-09 06:10:51.110451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.543 [2024-12-09 06:10:51.110501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.543 2024/12/09 06:10:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.543 [2024-12-09 06:10:51.120993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.543 [2024-12-09 06:10:51.121039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.543 2024/12/09 06:10:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.802 [2024-12-09 06:10:51.137434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.802 [2024-12-09 06:10:51.137482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.802 2024/12/09 06:10:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.802 [2024-12-09 06:10:51.154030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.802 [2024-12-09 06:10:51.154078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.802 2024/12/09 06:10:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.802 [2024-12-09 06:10:51.170495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.802 [2024-12-09 06:10:51.170544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.803 2024/12/09 06:10:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.803 [2024-12-09 06:10:51.181417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.803 [2024-12-09 06:10:51.181460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.803 2024/12/09 06:10:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.803 [2024-12-09 06:10:51.195836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.803 [2024-12-09 06:10:51.195904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.803 2024/12/09 06:10:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.803 [2024-12-09 06:10:51.205623] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.803 [2024-12-09 06:10:51.205699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.803 2024/12/09 06:10:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.803 [2024-12-09 06:10:51.221357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.803 [2024-12-09 06:10:51.221406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.803 2024/12/09 06:10:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:56.803 [2024-12-09 06:10:51.236251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:56.803 [2024-12-09 06:10:51.236301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:56.803 2024/12/09 06:10:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
01:11:56.803 [2024-12-09 06:10:51.255926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
01:11:56.803 [2024-12-09 06:10:51.256012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
01:11:56.803 2024/12/09 06:10:51 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-message sequence (subsystem.c:2130: Requested NSID 1 already in use; nvmf_rpc.c:1520: Unable to add namespace; JSON-RPC error Code=-32602 Msg=Invalid parameters) repeats continuously, with only the timestamps advancing, from 06:10:51.267 through 06:10:53.302; the interleaved fio throughput samples from that interval are retained below ...]
01:11:57.062 11036.00 IOPS, 86.22 MiB/s [2024-12-09T06:10:51.648Z]
01:11:58.098 10989.00 IOPS, 85.85 MiB/s [2024-12-09T06:10:52.684Z]
01:11:58.878 [2024-12-09 06:10:53.302378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
01:11:58.878 [2024-12-09 06:10:53.302434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
01:11:58.878 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1]
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:58.878 [2024-12-09 06:10:53.313325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:58.878 [2024-12-09 06:10:53.313373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:58.878 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:58.878 [2024-12-09 06:10:53.329070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:58.878 [2024-12-09 06:10:53.329118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:58.878 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:58.878 [2024-12-09 06:10:53.347524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:58.878 [2024-12-09 06:10:53.347574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:58.878 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:58.878 [2024-12-09 06:10:53.368449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:58.878 [2024-12-09 06:10:53.368498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:58.878 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:58.878 [2024-12-09 06:10:53.386374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:58.878 [2024-12-09 06:10:53.386443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:58.878 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:58.878 [2024-12-09 06:10:53.400291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:58.878 [2024-12-09 06:10:53.400339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:58.878 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 01:11:58.878 [2024-12-09 06:10:53.421195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:58.878 [2024-12-09 06:10:53.421227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:58.878 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:58.878 [2024-12-09 06:10:53.437350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:58.878 [2024-12-09 06:10:53.437399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:58.878 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:58.878 [2024-12-09 06:10:53.455547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:58.878 [2024-12-09 06:10:53.455598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:58.878 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.137 [2024-12-09 06:10:53.475904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.137 [2024-12-09 06:10:53.475950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.137 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.137 [2024-12-09 06:10:53.486275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.137 [2024-12-09 06:10:53.486322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.137 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.137 [2024-12-09 06:10:53.501133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.137 [2024-12-09 06:10:53.501165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.137 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.137 [2024-12-09 06:10:53.520092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 01:11:59.137 [2024-12-09 06:10:53.520124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.137 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.137 10966.67 IOPS, 85.68 MiB/s [2024-12-09T06:10:53.723Z] [2024-12-09 06:10:53.530649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.137 [2024-12-09 06:10:53.530736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.137 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.137 [2024-12-09 06:10:53.548354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.137 [2024-12-09 06:10:53.548402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.137 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.137 [2024-12-09 06:10:53.568796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.137 [2024-12-09 06:10:53.568845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.137 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.137 [2024-12-09 06:10:53.588415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.137 [2024-12-09 06:10:53.588465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.137 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.137 [2024-12-09 06:10:53.606187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.137 [2024-12-09 06:10:53.606233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.137 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.137 [2024-12-09 06:10:53.617232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.137 [2024-12-09 06:10:53.617263] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.137 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.137 [2024-12-09 06:10:53.632933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.137 [2024-12-09 06:10:53.632997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.137 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.137 [2024-12-09 06:10:53.651797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.137 [2024-12-09 06:10:53.651887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.137 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.137 [2024-12-09 06:10:53.662310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.137 [2024-12-09 06:10:53.662358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.137 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.137 [2024-12-09 06:10:53.678105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.137 [2024-12-09 06:10:53.678152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.137 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.137 [2024-12-09 06:10:53.693501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.137 [2024-12-09 06:10:53.693550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.137 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.137 [2024-12-09 06:10:53.710062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.137 [2024-12-09 06:10:53.710110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.138 2024/12/09 06:10:53 error on JSON-RPC 
call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.396 [2024-12-09 06:10:53.725426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.397 [2024-12-09 06:10:53.725476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.397 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.397 [2024-12-09 06:10:53.750262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.397 [2024-12-09 06:10:53.750299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.397 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.397 [2024-12-09 06:10:53.765823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.397 [2024-12-09 06:10:53.765885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.397 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.397 [2024-12-09 06:10:53.781815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.397 [2024-12-09 06:10:53.781861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.397 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.397 [2024-12-09 06:10:53.798383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.397 [2024-12-09 06:10:53.798446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.397 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.397 [2024-12-09 06:10:53.820446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.397 [2024-12-09 06:10:53.820508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.397 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.397 [2024-12-09 06:10:53.838269] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.397 [2024-12-09 06:10:53.838346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.397 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.397 [2024-12-09 06:10:53.860387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.397 [2024-12-09 06:10:53.860452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.397 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.397 [2024-12-09 06:10:53.877159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.397 [2024-12-09 06:10:53.877208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.397 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.397 [2024-12-09 06:10:53.896101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.397 [2024-12-09 06:10:53.896149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.397 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.397 [2024-12-09 06:10:53.906923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.397 [2024-12-09 06:10:53.906958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.397 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.397 [2024-12-09 06:10:53.925280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.397 [2024-12-09 06:10:53.925328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.397 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns 
method, err: Code=-32602 Msg=Invalid parameters 01:11:59.397 [2024-12-09 06:10:53.944356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.397 [2024-12-09 06:10:53.944394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.397 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.397 [2024-12-09 06:10:53.964621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.397 [2024-12-09 06:10:53.964681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.397 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.656 [2024-12-09 06:10:53.982524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.656 [2024-12-09 06:10:53.982574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.656 2024/12/09 06:10:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.656 [2024-12-09 06:10:54.004135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.656 [2024-12-09 06:10:54.004183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.656 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.656 [2024-12-09 06:10:54.015242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.656 [2024-12-09 06:10:54.015289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.656 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.656 [2024-12-09 06:10:54.032178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.656 [2024-12-09 06:10:54.032226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.657 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.657 [2024-12-09 06:10:54.042634] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.657 [2024-12-09 06:10:54.042738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.657 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.657 [2024-12-09 06:10:54.057102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.657 [2024-12-09 06:10:54.057149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.657 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.657 [2024-12-09 06:10:54.075546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.657 [2024-12-09 06:10:54.075611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.657 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.657 [2024-12-09 06:10:54.096142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.657 [2024-12-09 06:10:54.096217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.657 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.657 [2024-12-09 06:10:54.107118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.657 [2024-12-09 06:10:54.107165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.657 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.657 [2024-12-09 06:10:54.121023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.657 [2024-12-09 06:10:54.121053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.657 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.657 [2024-12-09 06:10:54.137357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.657 [2024-12-09 
06:10:54.137420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.657 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.657 [2024-12-09 06:10:54.154408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.657 [2024-12-09 06:10:54.154460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.657 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.657 [2024-12-09 06:10:54.164945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.657 [2024-12-09 06:10:54.164977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.657 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.657 [2024-12-09 06:10:54.181061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.657 [2024-12-09 06:10:54.181111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.657 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.657 [2024-12-09 06:10:54.200333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.657 [2024-12-09 06:10:54.200399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.657 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.657 [2024-12-09 06:10:54.219570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.657 [2024-12-09 06:10:54.219620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.657 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.657 [2024-12-09 06:10:54.241132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.657 [2024-12-09 06:10:54.241182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.917 2024/12/09 06:10:54 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.917 [2024-12-09 06:10:54.258368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.917 [2024-12-09 06:10:54.258419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.917 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.917 [2024-12-09 06:10:54.281219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.917 [2024-12-09 06:10:54.281267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.917 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.917 [2024-12-09 06:10:54.297555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.917 [2024-12-09 06:10:54.297606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.917 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.917 [2024-12-09 06:10:54.313973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.917 [2024-12-09 06:10:54.314067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.917 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.917 [2024-12-09 06:10:54.330221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.917 [2024-12-09 06:10:54.330270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.917 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.917 [2024-12-09 06:10:54.345765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.917 [2024-12-09 06:10:54.345814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.917 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.917 [2024-12-09 06:10:54.362483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.917 [2024-12-09 06:10:54.362534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.917 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.917 [2024-12-09 06:10:54.385243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.917 [2024-12-09 06:10:54.385312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.917 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.917 [2024-12-09 06:10:54.401264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.917 [2024-12-09 06:10:54.401313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.917 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.917 [2024-12-09 06:10:54.420242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.917 [2024-12-09 06:10:54.420292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.917 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.917 [2024-12-09 06:10:54.430486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.917 [2024-12-09 06:10:54.430535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.917 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.917 [2024-12-09 06:10:54.448220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.918 [2024-12-09 06:10:54.448270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.918 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.918 [2024-12-09 06:10:54.468380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.918 [2024-12-09 06:10:54.468429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.918 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:11:59.918 [2024-12-09 06:10:54.488466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:11:59.918 [2024-12-09 06:10:54.488516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:11:59.918 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.192 [2024-12-09 06:10:54.507687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.192 [2024-12-09 06:10:54.507722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.192 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.192 10970.50 IOPS, 85.71 MiB/s [2024-12-09T06:10:54.778Z] [2024-12-09 06:10:54.528662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.192 [2024-12-09 06:10:54.528739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.192 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.192 [2024-12-09 06:10:54.546362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.192 [2024-12-09 06:10:54.546421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.192 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.192 [2024-12-09 06:10:54.561596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.192 [2024-12-09 06:10:54.561672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.192 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 01:12:00.192 [2024-12-09 06:10:54.578558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.192 [2024-12-09 06:10:54.578625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.192 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.192 [2024-12-09 06:10:54.601496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.192 [2024-12-09 06:10:54.601555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.192 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.192 [2024-12-09 06:10:54.616970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.192 [2024-12-09 06:10:54.617007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.192 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.192 [2024-12-09 06:10:54.636321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.192 [2024-12-09 06:10:54.636355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.192 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.192 [2024-12-09 06:10:54.655514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.192 [2024-12-09 06:10:54.655564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.192 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.192 [2024-12-09 06:10:54.676929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.192 [2024-12-09 06:10:54.676960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.192 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.192 [2024-12-09 06:10:54.694024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 01:12:00.192 [2024-12-09 06:10:54.694091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.192 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.192 [2024-12-09 06:10:54.710059] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.192 [2024-12-09 06:10:54.710116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.192 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.192 [2024-12-09 06:10:54.726314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.192 [2024-12-09 06:10:54.726362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.192 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.192 [2024-12-09 06:10:54.742859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.192 [2024-12-09 06:10:54.742898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.192 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.192 [2024-12-09 06:10:54.755165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.192 [2024-12-09 06:10:54.755212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.192 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.462 [2024-12-09 06:10:54.776987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.462 [2024-12-09 06:10:54.777038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.462 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.462 [2024-12-09 06:10:54.799976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.462 [2024-12-09 06:10:54.800024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 01:12:00.462 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.462 [2024-12-09 06:10:54.810810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.462 [2024-12-09 06:10:54.810844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.462 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.462 [2024-12-09 06:10:54.825030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.462 [2024-12-09 06:10:54.825077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.462 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.462 [2024-12-09 06:10:54.844430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.462 [2024-12-09 06:10:54.844467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.462 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.462 [2024-12-09 06:10:54.863525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.462 [2024-12-09 06:10:54.863574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.462 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.462 [2024-12-09 06:10:54.884956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.462 [2024-12-09 06:10:54.884989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.462 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.462 [2024-12-09 06:10:54.904927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.462 [2024-12-09 06:10:54.904975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.462 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.462 [2024-12-09 06:10:54.921825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.462 [2024-12-09 06:10:54.921873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.462 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.462 [2024-12-09 06:10:54.938590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.462 [2024-12-09 06:10:54.938639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.462 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.462 [2024-12-09 06:10:54.949099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.462 [2024-12-09 06:10:54.949147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.462 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.462 [2024-12-09 06:10:54.965226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.462 [2024-12-09 06:10:54.965266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.462 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.463 [2024-12-09 06:10:54.984467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.463 [2024-12-09 06:10:54.984535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.463 2024/12/09 06:10:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.463 [2024-12-09 06:10:55.001439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.463 [2024-12-09 06:10:55.001491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.463 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.463 [2024-12-09 06:10:55.020275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.463 [2024-12-09 06:10:55.020324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.463 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.463 [2024-12-09 06:10:55.030871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.463 [2024-12-09 06:10:55.030905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.463 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.722 [2024-12-09 06:10:55.048833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.722 [2024-12-09 06:10:55.048866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.722 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.722 [2024-12-09 06:10:55.067928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.722 [2024-12-09 06:10:55.067977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.723 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.723 [2024-12-09 06:10:55.078732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.723 [2024-12-09 06:10:55.078787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.723 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.723 [2024-12-09 06:10:55.098986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.723 [2024-12-09 06:10:55.099067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.723 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 01:12:00.723 [2024-12-09 06:10:55.109402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.723 [2024-12-09 06:10:55.109449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.723 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.723 [2024-12-09 06:10:55.125624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.723 [2024-12-09 06:10:55.125684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.723 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.723 [2024-12-09 06:10:55.141773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.723 [2024-12-09 06:10:55.141853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.723 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.723 [2024-12-09 06:10:55.156555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.723 [2024-12-09 06:10:55.156588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.723 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.723 [2024-12-09 06:10:55.176409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.723 [2024-12-09 06:10:55.176463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.723 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.723 [2024-12-09 06:10:55.196671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.723 [2024-12-09 06:10:55.196731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.723 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.723 [2024-12-09 06:10:55.211325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 01:12:00.723 [2024-12-09 06:10:55.211374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.723 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.723 [2024-12-09 06:10:55.221488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.723 [2024-12-09 06:10:55.221521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.723 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.723 [2024-12-09 06:10:55.237904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.723 [2024-12-09 06:10:55.237953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.723 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.723 [2024-12-09 06:10:55.253484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.723 [2024-12-09 06:10:55.253534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.723 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.723 [2024-12-09 06:10:55.270287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.723 [2024-12-09 06:10:55.270346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.723 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.723 [2024-12-09 06:10:55.285405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.723 [2024-12-09 06:10:55.285455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.723 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.723 [2024-12-09 06:10:55.303681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.723 [2024-12-09 06:10:55.303731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 01:12:00.981 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.981 [2024-12-09 06:10:55.324081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.981 [2024-12-09 06:10:55.324134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.981 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.981 [2024-12-09 06:10:55.334087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.981 [2024-12-09 06:10:55.334135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.981 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.981 [2024-12-09 06:10:55.347982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.981 [2024-12-09 06:10:55.348029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.981 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.981 [2024-12-09 06:10:55.357947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.981 [2024-12-09 06:10:55.358009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.981 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.981 [2024-12-09 06:10:55.373956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.981 [2024-12-09 06:10:55.374003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.981 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.981 [2024-12-09 06:10:55.390145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.981 [2024-12-09 06:10:55.390194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.981 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.981 [2024-12-09 06:10:55.405960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.981 [2024-12-09 06:10:55.405991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.981 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.981 [2024-12-09 06:10:55.422372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.981 [2024-12-09 06:10:55.422421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.981 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.981 [2024-12-09 06:10:55.435186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.981 [2024-12-09 06:10:55.435234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.981 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.981 [2024-12-09 06:10:55.445285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.981 [2024-12-09 06:10:55.445332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.981 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.981 [2024-12-09 06:10:55.463147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.981 [2024-12-09 06:10:55.463195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.981 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.981 [2024-12-09 06:10:55.473321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.981 [2024-12-09 06:10:55.473354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.981 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.981 [2024-12-09 06:10:55.488447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.981 [2024-12-09 06:10:55.488499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.981 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.981 [2024-12-09 06:10:55.508517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.981 [2024-12-09 06:10:55.508585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.981 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.981 10986.60 IOPS, 85.83 MiB/s [2024-12-09T06:10:55.567Z] [2024-12-09 06:10:55.527601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.981 [2024-12-09 06:10:55.527681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.981 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.981 01:12:00.981 Latency(us) 01:12:00.981 [2024-12-09T06:10:55.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:12:00.981 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 01:12:00.981 Nvme1n1 : 5.01 10987.46 85.84 0.00 0.00 11633.88 2710.81 18826.71 01:12:00.981 [2024-12-09T06:10:55.567Z] =================================================================================================================== 01:12:00.981 [2024-12-09T06:10:55.567Z] Total : 10987.46 85.84 0.00 0.00 11633.88 2710.81 18826.71 01:12:00.981 [2024-12-09 06:10:55.536070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.981 [2024-12-09 06:10:55.536120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.981 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.981 [2024-12-09 06:10:55.547947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.981 [2024-12-09 06:10:55.548012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.981 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:00.981 [2024-12-09 06:10:55.560007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:00.981 [2024-12-09 06:10:55.560067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:00.982 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:01.239 [2024-12-09 06:10:55.571990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:01.239 [2024-12-09 06:10:55.572076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:01.239 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:01.239 [2024-12-09 06:10:55.584004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:01.239 [2024-12-09 06:10:55.584048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:01.239 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:01.239 [2024-12-09 06:10:55.596002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:01.239 [2024-12-09 06:10:55.596051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:01.239 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:01.239 [2024-12-09 06:10:55.607979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:01.239 [2024-12-09 06:10:55.608052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:01.239 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:01.239 [2024-12-09 06:10:55.619965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:01.239 [2024-12-09 06:10:55.620008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:01.239 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:01.239 [2024-12-09 06:10:55.631937] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:01.239 [2024-12-09 06:10:55.631964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:01.239 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:01.239 [2024-12-09 06:10:55.643976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:01.239 [2024-12-09 06:10:55.644014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:01.239 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:01.239 [2024-12-09 06:10:55.655950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:01.239 [2024-12-09 06:10:55.655983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:01.239 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:01.239 [2024-12-09 06:10:55.667943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:01.239 [2024-12-09 06:10:55.667986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:01.239 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:01.239 [2024-12-09 06:10:55.679934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 01:12:01.239 [2024-12-09 06:10:55.679962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:01.239 2024/12/09 06:10:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:01.239 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (104151) - No such process 01:12:01.239 06:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 104151 01:12:01.239 06:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 01:12:01.239 06:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:01.239 06:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:12:01.239 06:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:12:01.239 06:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 01:12:01.239 06:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:01.239 06:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:12:01.239 delay0 01:12:01.239 06:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:12:01.239 06:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 01:12:01.239 06:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:01.239 06:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:12:01.239 06:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:12:01.239 06:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 01:12:01.496 [2024-12-09 06:10:55.876618] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 01:12:09.613 Initializing NVMe Controllers 01:12:09.613 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:12:09.613 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 01:12:09.613 Initialization complete. Launching workers. 
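The repeated Code=-32602 failures earlier in this run come from re-adding NSID 1 while it is still attached to cnode1; the exact JSON-RPC method and parameter shape are visible in the logged params map. Below is a minimal Python sketch of issuing the same nvmf_subsystem_add_ns call directly against the SPDK application's RPC socket, assuming the default /var/tmp/spdk.sock path (this test run may use a different socket); it is an illustration of the call the log shows, not part of zcopy.sh.

    import json
    import socket

    def spdk_rpc(method, params, sock_path="/var/tmp/spdk.sock"):
        # Send one JSON-RPC 2.0 request to the SPDK app socket and return the parsed reply.
        req = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(req).encode())
            buf = b""
            while True:
                chunk = s.recv(4096)
                if not chunk:
                    break
                buf += chunk
                try:
                    return json.loads(buf.decode())
                except json.JSONDecodeError:
                    continue  # reply not fully received yet

    # Re-adding NSID 1 while it is in use is expected to be rejected with code -32602,
    # matching the "Requested NSID 1 already in use" errors logged above.
    reply = spdk_rpc("nvmf_subsystem_add_ns",
                     {"nqn": "nqn.2016-06.io.spdk:cnode1",
                      "namespace": {"bdev_name": "malloc0", "nsid": 1}})
    print(reply.get("error", reply.get("result")))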
01:12:09.613 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 269, failed: 16488 01:12:09.613 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 16664, failed to submit 93 01:12:09.613 success 16581, unsuccessful 83, failed 0 01:12:09.613 06:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 01:12:09.613 06:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 01:12:09.613 06:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 01:12:09.613 06:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 01:12:09.613 06:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:12:09.613 06:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 01:12:09.613 06:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 01:12:09.613 06:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:12:09.613 rmmod nvme_tcp 01:12:09.613 rmmod nvme_fabrics 01:12:09.613 rmmod nvme_keyring 01:12:09.613 06:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:12:09.613 06:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 01:12:09.613 06:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 01:12:09.613 06:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 104006 ']' 01:12:09.613 06:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 104006 01:12:09.614 06:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 104006 ']' 01:12:09.614 06:11:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 104006 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104006 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:12:09.614 killing process with pid 104006 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104006' 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 104006 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 104006 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:12:09.614 06:11:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 01:12:09.614 01:12:09.614 real 0m24.979s 01:12:09.614 user 0m38.537s 01:12:09.614 sys 0m8.074s 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 01:12:09.614 ************************************ 01:12:09.614 END TEST nvmf_zcopy 
01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 01:12:09.614 ************************************ 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:12:09.614 ************************************ 01:12:09.614 START TEST nvmf_nmic 01:12:09.614 ************************************ 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 01:12:09.614 * Looking for test storage... 01:12:09.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 01:12:09.614 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:12:09.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:12:09.615 --rc genhtml_branch_coverage=1 01:12:09.615 --rc genhtml_function_coverage=1 01:12:09.615 --rc genhtml_legend=1 01:12:09.615 --rc geninfo_all_blocks=1 01:12:09.615 --rc geninfo_unexecuted_blocks=1 01:12:09.615 01:12:09.615 ' 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:12:09.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:12:09.615 --rc genhtml_branch_coverage=1 01:12:09.615 --rc genhtml_function_coverage=1 01:12:09.615 --rc genhtml_legend=1 01:12:09.615 --rc geninfo_all_blocks=1 01:12:09.615 --rc geninfo_unexecuted_blocks=1 01:12:09.615 01:12:09.615 ' 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:12:09.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:12:09.615 --rc genhtml_branch_coverage=1 01:12:09.615 --rc genhtml_function_coverage=1 01:12:09.615 --rc genhtml_legend=1 01:12:09.615 --rc geninfo_all_blocks=1 01:12:09.615 --rc geninfo_unexecuted_blocks=1 01:12:09.615 01:12:09.615 ' 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:12:09.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:12:09.615 --rc genhtml_branch_coverage=1 01:12:09.615 --rc genhtml_function_coverage=1 01:12:09.615 --rc genhtml_legend=1 01:12:09.615 --rc geninfo_all_blocks=1 01:12:09.615 --rc geninfo_unexecuted_blocks=1 01:12:09.615 01:12:09.615 ' 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:12:09.615 06:11:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:12:09.615 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:12:09.616 Cannot find device "nvmf_init_br" 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # true 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:12:09.616 Cannot find device "nvmf_init_br2" 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # true 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:12:09.616 Cannot find device "nvmf_tgt_br" 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # true 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:12:09.616 Cannot find device "nvmf_tgt_br2" 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # true 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:12:09.616 Cannot find device "nvmf_init_br" 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # true 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:12:09.616 Cannot find device "nvmf_init_br2" 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # true 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:12:09.616 Cannot find device "nvmf_tgt_br" 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # true 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:12:09.616 Cannot find device "nvmf_tgt_br2" 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # true 
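Note: the "Cannot find device" and "Cannot open network namespace" messages above are expected. nvmf_veth_init starts by tearing down whatever a previous run may have left behind, and each removal is followed by true so a missing interface never fails the test. Condensed out of the xtrace, the cleanup half is roughly the following (interface and namespace names as in the log; this is a sketch, not the literal script):

  # best-effort cleanup; every command is allowed to fail on a clean host
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster || true   # detach from the bridge, if it exists
      ip link set "$dev" down || true
  done
  ip link delete nvmf_br type bridge || true
  ip link delete nvmf_init_if || true
  ip link delete nvmf_init_if2 || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true

The creation half follows in the log: two veth pairs for the initiator side (10.0.0.1 and 10.0.0.2) and two for the target side (10.0.0.3 and 10.0.0.4, moved into the nvmf_tgt_ns_spdk namespace), all attached to the nvmf_br bridge, with iptables ACCEPT rules for TCP port 4420.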
01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:12:09.616 Cannot find device "nvmf_br" 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # true 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:12:09.616 Cannot find device "nvmf_init_if" 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # true 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:12:09.616 Cannot find device "nvmf_init_if2" 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # true 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:12:09.616 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # true 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:12:09.616 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # true 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:12:09.616 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:12:09.617 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:12:09.617 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:12:09.617 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:12:09.617 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:12:09.617 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:12:09.617 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:12:09.617 06:11:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:12:09.617 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:12:09.617 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:12:09.617 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:12:09.617 06:11:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:12:09.617 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:12:09.617 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 01:12:09.617 01:12:09.617 --- 10.0.0.3 ping statistics --- 01:12:09.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:12:09.617 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:12:09.617 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:12:09.617 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 01:12:09.617 01:12:09.617 --- 10.0.0.4 ping statistics --- 01:12:09.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:12:09.617 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:12:09.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:12:09.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 01:12:09.617 01:12:09.617 --- 10.0.0.1 ping statistics --- 01:12:09.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:12:09.617 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:12:09.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:12:09.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 01:12:09.617 01:12:09.617 --- 10.0.0.2 ping statistics --- 01:12:09.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:12:09.617 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=104528 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 104528 01:12:09.617 06:11:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 104528 ']' 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:12:09.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 01:12:09.617 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:12:09.876 [2024-12-09 06:11:04.207682] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:12:09.876 [2024-12-09 06:11:04.209106] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:12:09.876 [2024-12-09 06:11:04.209187] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:12:09.876 [2024-12-09 06:11:04.366833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:12:09.876 [2024-12-09 06:11:04.412568] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:12:09.876 [2024-12-09 06:11:04.412637] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:12:09.876 [2024-12-09 06:11:04.412682] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:12:09.876 [2024-12-09 06:11:04.412693] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:12:09.876 [2024-12-09 06:11:04.412701] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:12:09.877 [2024-12-09 06:11:04.413599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:12:09.877 [2024-12-09 06:11:04.415063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:12:09.877 [2024-12-09 06:11:04.415265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:12:09.877 [2024-12-09 06:11:04.415272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:12:10.136 [2024-12-09 06:11:04.470253] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:12:10.136 [2024-12-09 06:11:04.470788] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:12:10.136 [2024-12-09 06:11:04.470881] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
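Note: nvmfappstart launches the target inside the test namespace and then waits for its RPC socket. Stripped of the xtrace wrappers, the launch amounts to roughly the following; the polling loop is only a simplified stand-in for the waitforlisten helper, and the default /var/tmp/spdk.sock socket path is assumed:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!

  # crude stand-in for waitforlisten: poll until the JSON-RPC socket answers
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done

The -m 0xF mask gives the target cores 0-3, which matches the four "Reactor started on core N" notices, and --interrupt-mode is why each spdk_thread is reported as switched to intr mode.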
01:12:10.136 [2024-12-09 06:11:04.471021] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 01:12:10.136 [2024-12-09 06:11:04.472785] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:12:10.136 [2024-12-09 06:11:04.560579] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:12:10.136 Malloc0 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 
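Note: rpc_cmd in these entries is the test wrapper around scripts/rpc.py, so the nmic.sh setup above boils down to a handful of RPC calls. A condensed sketch (rpc.py talking to the default /var/tmp/spdk.sock):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0        # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420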
01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:12:10.136 [2024-12-09 06:11:04.640584] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:12:10.136 test case1: single bdev can't be used in multiple subsystems 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:10.136 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:12:10.136 [2024-12-09 06:11:04.668260] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 01:12:10.136 [2024-12-09 06:11:04.668299] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 01:12:10.136 [2024-12-09 06:11:04.668311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 01:12:10.136 2024/12/09 06:11:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 01:12:10.136 request: 01:12:10.136 { 01:12:10.136 "method": "nvmf_subsystem_add_ns", 01:12:10.136 "params": { 01:12:10.137 "nqn": "nqn.2016-06.io.spdk:cnode2", 01:12:10.137 "namespace": { 01:12:10.137 "bdev_name": "Malloc0", 01:12:10.137 "no_auto_visible": false, 01:12:10.137 "hide_metadata": false 01:12:10.137 } 01:12:10.137 } 01:12:10.137 } 01:12:10.137 Got JSON-RPC error response 01:12:10.137 GoRPCClient: error on JSON-RPC call 
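Note: the JSON-RPC error above is the point of test case 1. Adding Malloc0 to cnode1 made the NVMe-oF target claim the bdev exclusive_write, so a second subsystem cannot open it and nvmf_subsystem_add_ns fails with -32602; the script only checks that the call returns non-zero. Continuing the sketch above:

  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420
  if $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
      echo "unexpected: namespace was added to a second subsystem"
  else
      echo ' Adding namespace failed - expected result.'
  fi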
01:12:10.137 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:12:10.137 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 01:12:10.137 Adding namespace failed - expected result. 01:12:10.137 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 01:12:10.137 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 01:12:10.137 test case2: host connect to nvmf target in multiple paths 01:12:10.137 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 01:12:10.137 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 01:12:10.137 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:10.137 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:12:10.137 [2024-12-09 06:11:04.680393] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 01:12:10.137 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:12:10.137 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 01:12:10.396 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 01:12:10.396 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 01:12:10.396 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 01:12:10.396 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:12:10.396 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 01:12:10.396 06:11:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 01:12:12.308 06:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:12:12.308 06:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:12:12.308 06:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:12:12.308 06:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 01:12:12.308 06:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:12:12.308 06:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 01:12:12.308 
06:11:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 01:12:12.580 [global] 01:12:12.580 thread=1 01:12:12.580 invalidate=1 01:12:12.580 rw=write 01:12:12.580 time_based=1 01:12:12.580 runtime=1 01:12:12.580 ioengine=libaio 01:12:12.580 direct=1 01:12:12.580 bs=4096 01:12:12.580 iodepth=1 01:12:12.580 norandommap=0 01:12:12.580 numjobs=1 01:12:12.580 01:12:12.580 verify_dump=1 01:12:12.580 verify_backlog=512 01:12:12.580 verify_state_save=0 01:12:12.580 do_verify=1 01:12:12.580 verify=crc32c-intel 01:12:12.580 [job0] 01:12:12.580 filename=/dev/nvme0n1 01:12:12.580 Could not set queue depth (nvme0n1) 01:12:12.580 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:12:12.580 fio-3.35 01:12:12.580 Starting 1 thread 01:12:13.957 01:12:13.957 job0: (groupid=0, jobs=1): err= 0: pid=104623: Mon Dec 9 06:11:08 2024 01:12:13.957 read: IOPS=2887, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1001msec) 01:12:13.957 slat (nsec): min=12692, max=69557, avg=15818.15, stdev=4465.63 01:12:13.958 clat (usec): min=147, max=515, avg=173.94, stdev=18.75 01:12:13.958 lat (usec): min=161, max=549, avg=189.76, stdev=19.36 01:12:13.958 clat percentiles (usec): 01:12:13.958 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 01:12:13.958 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 01:12:13.958 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 194], 95.00th=[ 202], 01:12:13.958 | 99.00th=[ 219], 99.50th=[ 227], 99.90th=[ 449], 99.95th=[ 482], 01:12:13.958 | 99.99th=[ 515] 01:12:13.958 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 01:12:13.958 slat (nsec): min=18168, max=85081, avg=22313.13, stdev=5437.49 01:12:13.958 clat (usec): min=102, max=325, avg=121.18, stdev=12.40 01:12:13.958 lat (usec): min=122, max=355, avg=143.49, stdev=13.77 01:12:13.958 clat percentiles (usec): 01:12:13.958 | 1.00th=[ 106], 5.00th=[ 110], 10.00th=[ 112], 20.00th=[ 114], 01:12:13.958 | 30.00th=[ 116], 40.00th=[ 117], 50.00th=[ 118], 60.00th=[ 120], 01:12:13.958 | 70.00th=[ 123], 80.00th=[ 129], 90.00th=[ 137], 95.00th=[ 145], 01:12:13.958 | 99.00th=[ 165], 99.50th=[ 172], 99.90th=[ 212], 99.95th=[ 237], 01:12:13.958 | 99.99th=[ 326] 01:12:13.958 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 01:12:13.958 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 01:12:13.958 lat (usec) : 250=99.85%, 500=0.13%, 750=0.02% 01:12:13.958 cpu : usr=2.80%, sys=8.00%, ctx=5963, majf=0, minf=5 01:12:13.958 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:12:13.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:13.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:13.958 issued rwts: total=2890,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:13.958 latency : target=0, window=0, percentile=100.00%, depth=1 01:12:13.958 01:12:13.958 Run status group 0 (all jobs): 01:12:13.958 READ: bw=11.3MiB/s (11.8MB/s), 11.3MiB/s-11.3MiB/s (11.8MB/s-11.8MB/s), io=11.3MiB (11.8MB), run=1001-1001msec 01:12:13.958 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 01:12:13.958 01:12:13.958 Disk stats (read/write): 01:12:13.958 nvme0n1: ios=2610/2800, merge=0/0, ticks=490/355, in_queue=845, util=91.38% 01:12:13.958 06:11:08 
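Note: test case 2 connects the kernel initiator to cnode1 through both listeners (ports 4420 and 4421), so the same namespace is reached over two paths, which is why the disconnect that follows reports two controllers. The fio-wrapper invocation then drives a short verified write job against the resulting /dev/nvme0n1. Roughly equivalent raw commands (host NQN/ID values as generated earlier in the log; the fio line is an approximation of the wrapper's job file):

  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 --time_based --runtime=1 \
      --do_verify=1 --verify=crc32c-intel --verify_backlog=512 --verify_dump=1

The READ line in the fio summary comes from the verify pass, not from the workload itself.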
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:12:13.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:12:13.958 rmmod nvme_tcp 01:12:13.958 rmmod nvme_fabrics 01:12:13.958 rmmod nvme_keyring 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 104528 ']' 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 104528 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 104528 ']' 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 104528 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104528 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:12:13.958 killing process with 
pid 104528 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104528' 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 104528 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 104528 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 01:12:13.958 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:12:14.219 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 01:12:14.219 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:12:14.219 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:12:14.219 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:12:14.219 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:12:14.219 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:12:14.219 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:12:14.219 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:12:14.219 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:12:14.219 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:12:14.219 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:12:14.219 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:12:14.219 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:12:14.219 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:12:14.219 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:12:14.219 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:12:14.219 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 01:12:14.219 06:11:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:12:14.219 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:12:14.219 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:12:14.219 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 01:12:14.219 ************************************ 01:12:14.219 END TEST nvmf_nmic 01:12:14.219 ************************************ 01:12:14.219 01:12:14.219 real 0m5.277s 01:12:14.219 user 0m14.311s 01:12:14.219 sys 0m2.323s 01:12:14.219 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 01:12:14.219 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 01:12:14.479 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 01:12:14.479 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:12:14.479 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:12:14.479 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:12:14.479 ************************************ 01:12:14.479 START TEST nvmf_fio_target 01:12:14.479 ************************************ 01:12:14.479 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 01:12:14.479 * Looking for test storage... 
01:12:14.479 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:12:14.479 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:12:14.479 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 01:12:14.479 06:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:12:14.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:12:14.479 --rc genhtml_branch_coverage=1 01:12:14.479 --rc genhtml_function_coverage=1 01:12:14.479 --rc genhtml_legend=1 01:12:14.479 --rc geninfo_all_blocks=1 01:12:14.479 --rc geninfo_unexecuted_blocks=1 01:12:14.479 01:12:14.479 ' 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:12:14.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:12:14.479 --rc genhtml_branch_coverage=1 01:12:14.479 --rc genhtml_function_coverage=1 01:12:14.479 --rc genhtml_legend=1 01:12:14.479 --rc geninfo_all_blocks=1 01:12:14.479 --rc geninfo_unexecuted_blocks=1 01:12:14.479 01:12:14.479 ' 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:12:14.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:12:14.479 --rc genhtml_branch_coverage=1 01:12:14.479 --rc genhtml_function_coverage=1 01:12:14.479 --rc genhtml_legend=1 01:12:14.479 --rc geninfo_all_blocks=1 01:12:14.479 --rc geninfo_unexecuted_blocks=1 01:12:14.479 01:12:14.479 ' 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:12:14.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:12:14.479 --rc genhtml_branch_coverage=1 01:12:14.479 --rc genhtml_function_coverage=1 01:12:14.479 --rc genhtml_legend=1 01:12:14.479 --rc geninfo_all_blocks=1 01:12:14.479 --rc geninfo_unexecuted_blocks=1 01:12:14.479 
01:12:14.479 ' 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:12:14.479 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:12:14.739 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 01:12:14.739 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:12:14.739 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:12:14.739 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:12:14.739 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:14.739 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:14.739 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:14.739 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 01:12:14.739 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:14.739 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 01:12:14.739 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:12:14.739 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:12:14.739 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:12:14.739 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:12:14.739 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 01:12:14.739 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:12:14.739 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:12:14.739 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:12:14.739 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:12:14.739 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 01:12:14.739 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 01:12:14.739 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:12:14.739 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 01:12:14.739 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 01:12:14.739 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:12:14.739 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:12:14.739 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:12:14.740 06:11:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:12:14.740 Cannot find device "nvmf_init_br" 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # true 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:12:14.740 Cannot find device "nvmf_init_br2" 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # true 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:12:14.740 Cannot find device "nvmf_tgt_br" 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # true 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:12:14.740 Cannot find device "nvmf_tgt_br2" 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # true 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:12:14.740 Cannot find device "nvmf_init_br" 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # true 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:12:14.740 Cannot find device "nvmf_init_br2" 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # true 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:12:14.740 Cannot find device "nvmf_tgt_br" 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@168 -- # true 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:12:14.740 Cannot find device "nvmf_tgt_br2" 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # true 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:12:14.740 Cannot find device "nvmf_br" 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # true 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:12:14.740 Cannot find device "nvmf_init_if" 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # true 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:12:14.740 Cannot find device "nvmf_init_if2" 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # true 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:12:14.740 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # true 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:12:14.740 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # true 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:12:14.740 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:12:15.000 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:12:15.001 06:11:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:12:15.001 06:11:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:12:15.001 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:12:15.001 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 01:12:15.001 01:12:15.001 --- 10.0.0.3 ping statistics --- 01:12:15.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:12:15.001 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:12:15.001 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:12:15.001 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 01:12:15.001 01:12:15.001 --- 10.0.0.4 ping statistics --- 01:12:15.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:12:15.001 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:12:15.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:12:15.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 01:12:15.001 01:12:15.001 --- 10.0.0.1 ping statistics --- 01:12:15.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:12:15.001 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:12:15.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:12:15.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 01:12:15.001 01:12:15.001 --- 10.0.0.2 ping statistics --- 01:12:15.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:12:15.001 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=104847 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 104847 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 104847 ']' 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 01:12:15.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 01:12:15.001 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:12:15.261 [2024-12-09 06:11:09.587608] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
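The nvmf_veth_init trace above boils down to a small veth/bridge topology: two initiator-side interfaces on the host (10.0.0.1, 10.0.0.2), two target-side interfaces inside the nvmf_tgt_ns_spdk namespace (10.0.0.3, 10.0.0.4), all joined through the nvmf_br bridge, plus iptables rules admitting NVMe/TCP on port 4420. The following is a condensed, illustrative sketch reconstructed from that trace; it is not the actual nvmf/common.sh helper, which also handles teardown, retries and error paths.

#!/usr/bin/env bash
# Sketch of the topology built by nvmf_veth_init (names/addresses taken from the trace).
set -euo pipefail

NETNS=nvmf_tgt_ns_spdk
ip netns add "$NETNS"

# Two initiator-side and two target-side veth pairs.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# The target ends live inside the namespace that will run nvmf_tgt.
ip link set nvmf_tgt_if  netns "$NETNS"
ip link set nvmf_tgt_if2 netns "$NETNS"

# Addressing as in the trace: initiators .1/.2, targets .3/.4.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NETNS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NETNS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up, including loopback inside the namespace.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$NETNS" ip link set nvmf_tgt_if up
ip netns exec "$NETNS" ip link set nvmf_tgt_if2 up
ip netns exec "$NETNS" ip link set lo up

# One bridge joins the host-side ends so initiator and target can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Admit NVMe/TCP (port 4420) and bridge-local forwarding. The test's ipts wrapper
# additionally tags each rule with an SPDK_NVMF comment; omitted here.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity checks mirroring the trace: host -> target and target -> host.
ping -c 1 10.0.0.3
ping -c 1 10.0.0.4
ip netns exec "$NETNS" ping -c 1 10.0.0.1
ip netns exec "$NETNS" ping -c 1 10.0.0.2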
01:12:15.261 [2024-12-09 06:11:09.588733] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:12:15.261 [2024-12-09 06:11:09.588798] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:12:15.261 [2024-12-09 06:11:09.741039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:12:15.261 [2024-12-09 06:11:09.780124] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:12:15.261 [2024-12-09 06:11:09.780186] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:12:15.261 [2024-12-09 06:11:09.780201] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:12:15.261 [2024-12-09 06:11:09.780211] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:12:15.261 [2024-12-09 06:11:09.780220] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:12:15.261 [2024-12-09 06:11:09.781008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:12:15.261 [2024-12-09 06:11:09.781159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:12:15.261 [2024-12-09 06:11:09.781295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:12:15.261 [2024-12-09 06:11:09.781305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:12:15.261 [2024-12-09 06:11:09.837243] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:12:15.261 [2024-12-09 06:11:09.837918] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:12:15.261 [2024-12-09 06:11:09.838070] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 01:12:15.261 [2024-12-09 06:11:09.838222] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 01:12:15.261 [2024-12-09 06:11:09.838920] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
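Once the interrupt-mode target is listening on /var/tmp/spdk.sock, target/fio.sh provisions it over RPC and connects the initiator, as traced below: a TCP transport, seven 64 MiB malloc bdevs, a raid0 and a concat bdev built on top of them, one subsystem with four namespaces and a listener on 10.0.0.3:4420, then an nvme connect followed by a wait for all four namespaces. A condensed sketch of the equivalent commands follows; arguments are copied from the trace, but the loops and the final wait paraphrase the script's waitforserial helper rather than quoting it, and the exact ordering is slightly compressed.

#!/usr/bin/env bash
# Sketch of the RPC provisioning and initiator connect performed by target/fio.sh.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Transport options copied verbatim from the trace.
$rpc nvmf_create_transport -t tcp -o -u 8192

# Seven 64 MiB / 512 B-block malloc bdevs: Malloc0..Malloc6.
for _ in $(seq 0 6); do
    $rpc bdev_malloc_create 64 512
done

# RAID-0 over Malloc2/Malloc3 and a concat bdev over Malloc4..Malloc6.
$rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'

# One subsystem, four namespaces, one TCP listener on the target-side address.
$rpc nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    $rpc nvmf_subsystem_add_ns "$nqn" "$bdev"
done
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420

# Initiator side. The trace also passes --hostnqn/--hostid (a per-run UUID), omitted here.
nvme connect -t tcp -n "$nqn" -a 10.0.0.3 -s 4420

# Wait until all four namespaces show up, like waitforserial SPDKISFASTANDAWESOME 4.
until [[ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -eq 4 ]]; do
    sleep 2
done

With the four namespaces visible as /dev/nvme0n1..n4, the fio-wrapper invocations that follow in the log simply point one job at each device (write and randwrite, iodepth 1 and 128).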
01:12:15.521 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:12:15.521 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 01:12:15.521 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:12:15.521 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 01:12:15.521 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:12:15.521 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:12:15.521 06:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 01:12:15.780 [2024-12-09 06:11:10.223201] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:12:15.780 06:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:12:16.039 06:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 01:12:16.039 06:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:12:16.297 06:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 01:12:16.298 06:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:12:16.866 06:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 01:12:16.866 06:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:12:17.125 06:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 01:12:17.126 06:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 01:12:17.385 06:11:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:12:17.645 06:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 01:12:17.645 06:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:12:17.904 06:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 01:12:17.904 06:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 01:12:18.175 06:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 01:12:18.175 06:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 01:12:18.433 06:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:12:18.998 06:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 01:12:18.998 06:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:12:18.998 06:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 01:12:18.998 06:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 01:12:19.564 06:11:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:12:19.564 [2024-12-09 06:11:14.123156] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:12:19.564 06:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 01:12:20.132 06:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 01:12:20.390 06:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 01:12:20.390 06:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 01:12:20.390 06:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 01:12:20.390 06:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:12:20.390 06:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 01:12:20.390 06:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 01:12:20.390 06:11:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 01:12:22.330 06:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:12:22.330 06:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:12:22.330 06:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:12:22.330 06:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 01:12:22.330 06:11:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:12:22.330 06:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 01:12:22.330 06:11:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 01:12:22.330 [global] 01:12:22.330 thread=1 01:12:22.330 invalidate=1 01:12:22.330 rw=write 01:12:22.330 time_based=1 01:12:22.330 runtime=1 01:12:22.330 ioengine=libaio 01:12:22.330 direct=1 01:12:22.330 bs=4096 01:12:22.330 iodepth=1 01:12:22.330 norandommap=0 01:12:22.330 numjobs=1 01:12:22.330 01:12:22.330 verify_dump=1 01:12:22.330 verify_backlog=512 01:12:22.330 verify_state_save=0 01:12:22.330 do_verify=1 01:12:22.330 verify=crc32c-intel 01:12:22.330 [job0] 01:12:22.330 filename=/dev/nvme0n1 01:12:22.330 [job1] 01:12:22.330 filename=/dev/nvme0n2 01:12:22.330 [job2] 01:12:22.330 filename=/dev/nvme0n3 01:12:22.330 [job3] 01:12:22.330 filename=/dev/nvme0n4 01:12:22.590 Could not set queue depth (nvme0n1) 01:12:22.590 Could not set queue depth (nvme0n2) 01:12:22.590 Could not set queue depth (nvme0n3) 01:12:22.590 Could not set queue depth (nvme0n4) 01:12:22.590 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:12:22.590 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:12:22.590 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:12:22.590 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:12:22.590 fio-3.35 01:12:22.590 Starting 4 threads 01:12:23.968 01:12:23.968 job0: (groupid=0, jobs=1): err= 0: pid=105126: Mon Dec 9 06:11:18 2024 01:12:23.968 read: IOPS=1564, BW=6258KiB/s (6408kB/s)(6264KiB/1001msec) 01:12:23.968 slat (nsec): min=11605, max=64516, avg=15237.84, stdev=4342.55 01:12:23.968 clat (usec): min=167, max=558, avg=294.32, stdev=39.31 01:12:23.968 lat (usec): min=181, max=575, avg=309.56, stdev=38.96 01:12:23.968 clat percentiles (usec): 01:12:23.968 | 1.00th=[ 180], 5.00th=[ 200], 10.00th=[ 269], 20.00th=[ 285], 01:12:23.968 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 297], 60.00th=[ 302], 01:12:23.968 | 70.00th=[ 306], 80.00th=[ 310], 90.00th=[ 322], 95.00th=[ 338], 01:12:23.968 | 99.00th=[ 424], 99.50th=[ 433], 99.90th=[ 478], 99.95th=[ 562], 01:12:23.968 | 99.99th=[ 562] 01:12:23.968 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 01:12:23.968 slat (usec): min=11, max=127, avg=23.50, stdev= 5.53 01:12:23.968 clat (usec): min=131, max=378, avg=224.91, stdev=21.01 01:12:23.968 lat (usec): min=151, max=506, avg=248.40, stdev=21.97 01:12:23.968 clat percentiles (usec): 01:12:23.968 | 1.00th=[ 155], 5.00th=[ 194], 10.00th=[ 206], 20.00th=[ 212], 01:12:23.968 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 229], 01:12:23.968 | 70.00th=[ 233], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 260], 01:12:23.968 | 99.00th=[ 277], 99.50th=[ 285], 99.90th=[ 297], 99.95th=[ 355], 01:12:23.968 | 99.99th=[ 379] 01:12:23.968 bw ( KiB/s): min= 8192, max= 8192, per=22.33%, avg=8192.00, stdev= 0.00, samples=1 01:12:23.968 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 01:12:23.968 lat (usec) : 250=54.98%, 500=44.99%, 750=0.03% 01:12:23.968 cpu : usr=0.70%, sys=6.30%, 
ctx=3614, majf=0, minf=7 01:12:23.968 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:12:23.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:23.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:23.968 issued rwts: total=1566,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:23.968 latency : target=0, window=0, percentile=100.00%, depth=1 01:12:23.968 job1: (groupid=0, jobs=1): err= 0: pid=105127: Mon Dec 9 06:11:18 2024 01:12:23.968 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 01:12:23.968 slat (nsec): min=13211, max=44096, avg=16906.34, stdev=3312.51 01:12:23.968 clat (usec): min=175, max=404, avg=199.68, stdev=15.84 01:12:23.968 lat (usec): min=191, max=418, avg=216.59, stdev=16.35 01:12:23.968 clat percentiles (usec): 01:12:23.968 | 1.00th=[ 182], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 190], 01:12:23.968 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 196], 60.00th=[ 200], 01:12:23.968 | 70.00th=[ 202], 80.00th=[ 206], 90.00th=[ 215], 95.00th=[ 227], 01:12:23.968 | 99.00th=[ 273], 99.50th=[ 293], 99.90th=[ 322], 99.95th=[ 326], 01:12:23.968 | 99.99th=[ 404] 01:12:23.968 write: IOPS=2581, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1001msec); 0 zone resets 01:12:23.968 slat (usec): min=18, max=133, avg=23.74, stdev= 6.09 01:12:23.968 clat (usec): min=103, max=632, avg=145.02, stdev=16.76 01:12:23.968 lat (usec): min=141, max=657, avg=168.76, stdev=18.73 01:12:23.968 clat percentiles (usec): 01:12:23.968 | 1.00th=[ 128], 5.00th=[ 131], 10.00th=[ 135], 20.00th=[ 137], 01:12:23.968 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 145], 01:12:23.968 | 70.00th=[ 147], 80.00th=[ 151], 90.00th=[ 159], 95.00th=[ 167], 01:12:23.968 | 99.00th=[ 202], 99.50th=[ 217], 99.90th=[ 255], 99.95th=[ 318], 01:12:23.968 | 99.99th=[ 635] 01:12:23.968 bw ( KiB/s): min=12288, max=12288, per=33.49%, avg=12288.00, stdev= 0.00, samples=1 01:12:23.968 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 01:12:23.968 lat (usec) : 250=99.14%, 500=0.84%, 750=0.02% 01:12:23.968 cpu : usr=1.80%, sys=8.10%, ctx=5152, majf=0, minf=5 01:12:23.968 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:12:23.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:23.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:23.969 issued rwts: total=2560,2584,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:23.969 latency : target=0, window=0, percentile=100.00%, depth=1 01:12:23.969 job2: (groupid=0, jobs=1): err= 0: pid=105128: Mon Dec 9 06:11:18 2024 01:12:23.969 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 01:12:23.969 slat (nsec): min=9647, max=51130, avg=16004.69, stdev=4114.65 01:12:23.969 clat (usec): min=206, max=3662, avg=309.12, stdev=177.74 01:12:23.969 lat (usec): min=222, max=3680, avg=325.12, stdev=178.17 01:12:23.969 clat percentiles (usec): 01:12:23.969 | 1.00th=[ 227], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 285], 01:12:23.969 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 297], 60.00th=[ 302], 01:12:23.969 | 70.00th=[ 306], 80.00th=[ 310], 90.00th=[ 322], 95.00th=[ 330], 01:12:23.969 | 99.00th=[ 396], 99.50th=[ 433], 99.90th=[ 3621], 99.95th=[ 3654], 01:12:23.969 | 99.99th=[ 3654] 01:12:23.969 write: IOPS=1988, BW=7952KiB/s (8143kB/s)(7960KiB/1001msec); 0 zone resets 01:12:23.969 slat (usec): min=12, max=134, avg=23.91, stdev= 5.94 01:12:23.969 clat (usec): min=131, max=471, avg=224.38, stdev=20.36 01:12:23.969 lat 
(usec): min=160, max=605, avg=248.28, stdev=21.62 01:12:23.969 clat percentiles (usec): 01:12:23.969 | 1.00th=[ 161], 5.00th=[ 194], 10.00th=[ 206], 20.00th=[ 212], 01:12:23.969 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 229], 01:12:23.969 | 70.00th=[ 233], 80.00th=[ 237], 90.00th=[ 247], 95.00th=[ 255], 01:12:23.969 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 437], 99.95th=[ 474], 01:12:23.969 | 99.99th=[ 474] 01:12:23.969 bw ( KiB/s): min= 8192, max= 8192, per=22.33%, avg=8192.00, stdev= 0.00, samples=1 01:12:23.969 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 01:12:23.969 lat (usec) : 250=52.84%, 500=46.97% 01:12:23.969 lat (msec) : 2=0.06%, 4=0.14% 01:12:23.969 cpu : usr=1.40%, sys=5.70%, ctx=3527, majf=0, minf=11 01:12:23.969 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:12:23.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:23.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:23.969 issued rwts: total=1536,1990,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:23.969 latency : target=0, window=0, percentile=100.00%, depth=1 01:12:23.969 job3: (groupid=0, jobs=1): err= 0: pid=105129: Mon Dec 9 06:11:18 2024 01:12:23.969 read: IOPS=2481, BW=9924KiB/s (10.2MB/s)(9924KiB/1000msec) 01:12:23.969 slat (nsec): min=12613, max=45086, avg=14703.79, stdev=2314.86 01:12:23.969 clat (usec): min=179, max=2360, avg=202.82, stdev=61.75 01:12:23.969 lat (usec): min=193, max=2376, avg=217.52, stdev=61.83 01:12:23.969 clat percentiles (usec): 01:12:23.969 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 192], 01:12:23.969 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 198], 60.00th=[ 200], 01:12:23.969 | 70.00th=[ 204], 80.00th=[ 208], 90.00th=[ 215], 95.00th=[ 221], 01:12:23.969 | 99.00th=[ 245], 99.50th=[ 330], 99.90th=[ 824], 99.95th=[ 2147], 01:12:23.969 | 99.99th=[ 2376] 01:12:23.969 write: IOPS=2560, BW=10.0MiB/s (10.5MB/s)(10.0MiB/1000msec); 0 zone resets 01:12:23.969 slat (usec): min=15, max=111, avg=21.24, stdev= 4.47 01:12:23.969 clat (usec): min=127, max=549, avg=155.51, stdev=24.81 01:12:23.969 lat (usec): min=147, max=571, avg=176.75, stdev=25.37 01:12:23.969 clat percentiles (usec): 01:12:23.969 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 141], 01:12:23.969 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 151], 01:12:23.969 | 70.00th=[ 155], 80.00th=[ 167], 90.00th=[ 188], 95.00th=[ 200], 01:12:23.969 | 99.00th=[ 227], 99.50th=[ 265], 99.90th=[ 367], 99.95th=[ 412], 01:12:23.969 | 99.99th=[ 553] 01:12:23.969 bw ( KiB/s): min=11816, max=11816, per=32.20%, avg=11816.00, stdev= 0.00, samples=1 01:12:23.969 iops : min= 2954, max= 2954, avg=2954.00, stdev= 0.00, samples=1 01:12:23.969 lat (usec) : 250=99.27%, 500=0.65%, 750=0.02%, 1000=0.02% 01:12:23.969 lat (msec) : 4=0.04% 01:12:23.969 cpu : usr=1.40%, sys=7.00%, ctx=5042, majf=0, minf=19 01:12:23.969 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:12:23.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:23.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:23.969 issued rwts: total=2481,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:23.969 latency : target=0, window=0, percentile=100.00%, depth=1 01:12:23.969 01:12:23.969 Run status group 0 (all jobs): 01:12:23.969 READ: bw=31.8MiB/s (33.3MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=31.8MiB (33.4MB), run=1000-1001msec 01:12:23.969 WRITE: 
bw=35.8MiB/s (37.6MB/s), 7952KiB/s-10.1MiB/s (8143kB/s-10.6MB/s), io=35.9MiB (37.6MB), run=1000-1001msec 01:12:23.969 01:12:23.969 Disk stats (read/write): 01:12:23.969 nvme0n1: ios=1566/1536, merge=0/0, ticks=494/359, in_queue=853, util=88.08% 01:12:23.969 nvme0n2: ios=2097/2411, merge=0/0, ticks=462/377, in_queue=839, util=88.66% 01:12:23.969 nvme0n3: ios=1442/1536, merge=0/0, ticks=437/355, in_queue=792, util=88.49% 01:12:23.969 nvme0n4: ios=2048/2302, merge=0/0, ticks=426/363, in_queue=789, util=89.68% 01:12:23.969 06:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 01:12:23.969 [global] 01:12:23.969 thread=1 01:12:23.969 invalidate=1 01:12:23.969 rw=randwrite 01:12:23.969 time_based=1 01:12:23.969 runtime=1 01:12:23.969 ioengine=libaio 01:12:23.969 direct=1 01:12:23.969 bs=4096 01:12:23.969 iodepth=1 01:12:23.969 norandommap=0 01:12:23.969 numjobs=1 01:12:23.969 01:12:23.969 verify_dump=1 01:12:23.969 verify_backlog=512 01:12:23.969 verify_state_save=0 01:12:23.969 do_verify=1 01:12:23.969 verify=crc32c-intel 01:12:23.969 [job0] 01:12:23.969 filename=/dev/nvme0n1 01:12:23.969 [job1] 01:12:23.969 filename=/dev/nvme0n2 01:12:23.969 [job2] 01:12:23.969 filename=/dev/nvme0n3 01:12:23.969 [job3] 01:12:23.969 filename=/dev/nvme0n4 01:12:23.969 Could not set queue depth (nvme0n1) 01:12:23.969 Could not set queue depth (nvme0n2) 01:12:23.969 Could not set queue depth (nvme0n3) 01:12:23.969 Could not set queue depth (nvme0n4) 01:12:23.969 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:12:23.969 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:12:23.969 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:12:23.969 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:12:23.969 fio-3.35 01:12:23.969 Starting 4 threads 01:12:25.345 01:12:25.345 job0: (groupid=0, jobs=1): err= 0: pid=105188: Mon Dec 9 06:11:19 2024 01:12:25.345 read: IOPS=1717, BW=6869KiB/s (7034kB/s)(6876KiB/1001msec) 01:12:25.345 slat (nsec): min=11536, max=46143, avg=15167.70, stdev=3203.67 01:12:25.345 clat (usec): min=167, max=2113, avg=282.81, stdev=53.46 01:12:25.345 lat (usec): min=182, max=2137, avg=297.97, stdev=53.67 01:12:25.345 clat percentiles (usec): 01:12:25.345 | 1.00th=[ 182], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 265], 01:12:25.345 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 289], 01:12:25.345 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 310], 95.00th=[ 322], 01:12:25.345 | 99.00th=[ 351], 99.50th=[ 367], 99.90th=[ 766], 99.95th=[ 2114], 01:12:25.345 | 99.99th=[ 2114] 01:12:25.345 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 01:12:25.345 slat (usec): min=11, max=135, avg=22.35, stdev= 6.29 01:12:25.345 clat (usec): min=120, max=365, avg=212.35, stdev=27.05 01:12:25.345 lat (usec): min=141, max=500, avg=234.70, stdev=27.95 01:12:25.345 clat percentiles (usec): 01:12:25.345 | 1.00th=[ 129], 5.00th=[ 149], 10.00th=[ 180], 20.00th=[ 204], 01:12:25.345 | 30.00th=[ 210], 40.00th=[ 212], 50.00th=[ 215], 60.00th=[ 219], 01:12:25.345 | 70.00th=[ 223], 80.00th=[ 229], 90.00th=[ 241], 95.00th=[ 251], 01:12:25.345 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 293], 99.95th=[ 330], 01:12:25.345 | 99.99th=[ 367] 
01:12:25.345 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 01:12:25.345 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 01:12:25.345 lat (usec) : 250=54.95%, 500=44.97%, 750=0.03%, 1000=0.03% 01:12:25.345 lat (msec) : 4=0.03% 01:12:25.345 cpu : usr=1.00%, sys=6.10%, ctx=3768, majf=0, minf=19 01:12:25.345 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:12:25.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:25.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:25.345 issued rwts: total=1719,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:25.345 latency : target=0, window=0, percentile=100.00%, depth=1 01:12:25.345 job1: (groupid=0, jobs=1): err= 0: pid=105189: Mon Dec 9 06:11:19 2024 01:12:25.345 read: IOPS=1560, BW=6242KiB/s (6392kB/s)(6248KiB/1001msec) 01:12:25.345 slat (nsec): min=10981, max=30136, avg=13380.70, stdev=2242.00 01:12:25.345 clat (usec): min=180, max=802, avg=301.59, stdev=59.49 01:12:25.345 lat (usec): min=192, max=813, avg=314.97, stdev=60.06 01:12:25.345 clat percentiles (usec): 01:12:25.345 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 204], 20.00th=[ 285], 01:12:25.345 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 297], 60.00th=[ 306], 01:12:25.345 | 70.00th=[ 310], 80.00th=[ 330], 90.00th=[ 388], 95.00th=[ 404], 01:12:25.345 | 99.00th=[ 441], 99.50th=[ 457], 99.90th=[ 502], 99.95th=[ 799], 01:12:25.345 | 99.99th=[ 799] 01:12:25.345 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 01:12:25.345 slat (usec): min=11, max=141, avg=22.00, stdev= 5.91 01:12:25.345 clat (usec): min=124, max=902, avg=223.16, stdev=27.58 01:12:25.345 lat (usec): min=146, max=922, avg=245.16, stdev=27.76 01:12:25.345 clat percentiles (usec): 01:12:25.345 | 1.00th=[ 192], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 210], 01:12:25.345 | 30.00th=[ 215], 40.00th=[ 217], 50.00th=[ 219], 60.00th=[ 223], 01:12:25.345 | 70.00th=[ 227], 80.00th=[ 231], 90.00th=[ 241], 95.00th=[ 251], 01:12:25.345 | 99.00th=[ 330], 99.50th=[ 355], 99.90th=[ 453], 99.95th=[ 529], 01:12:25.345 | 99.99th=[ 906] 01:12:25.345 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 01:12:25.345 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 01:12:25.345 lat (usec) : 250=60.33%, 500=39.56%, 750=0.06%, 1000=0.06% 01:12:25.345 cpu : usr=1.30%, sys=5.20%, ctx=3614, majf=0, minf=7 01:12:25.345 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:12:25.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:25.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:25.345 issued rwts: total=1562,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:25.345 latency : target=0, window=0, percentile=100.00%, depth=1 01:12:25.345 job2: (groupid=0, jobs=1): err= 0: pid=105190: Mon Dec 9 06:11:19 2024 01:12:25.345 read: IOPS=1560, BW=6242KiB/s (6392kB/s)(6248KiB/1001msec) 01:12:25.345 slat (nsec): min=11154, max=44561, avg=13656.26, stdev=2674.96 01:12:25.345 clat (usec): min=193, max=805, avg=301.42, stdev=52.43 01:12:25.345 lat (usec): min=205, max=819, avg=315.07, stdev=53.03 01:12:25.345 clat percentiles (usec): 01:12:25.345 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 215], 20.00th=[ 285], 01:12:25.345 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 297], 60.00th=[ 306], 01:12:25.345 | 70.00th=[ 310], 80.00th=[ 326], 90.00th=[ 375], 95.00th=[ 388], 01:12:25.345 | 
99.00th=[ 424], 99.50th=[ 445], 99.90th=[ 578], 99.95th=[ 807], 01:12:25.345 | 99.99th=[ 807] 01:12:25.346 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 01:12:25.346 slat (nsec): min=14202, max=72662, avg=22234.22, stdev=6195.22 01:12:25.346 clat (usec): min=143, max=807, avg=222.79, stdev=25.57 01:12:25.346 lat (usec): min=181, max=836, avg=245.02, stdev=25.18 01:12:25.346 clat percentiles (usec): 01:12:25.346 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 210], 01:12:25.346 | 30.00th=[ 212], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 221], 01:12:25.346 | 70.00th=[ 225], 80.00th=[ 231], 90.00th=[ 241], 95.00th=[ 251], 01:12:25.346 | 99.00th=[ 334], 99.50th=[ 351], 99.90th=[ 396], 99.95th=[ 404], 01:12:25.346 | 99.99th=[ 807] 01:12:25.346 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 01:12:25.346 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 01:12:25.346 lat (usec) : 250=59.94%, 500=39.97%, 750=0.03%, 1000=0.06% 01:12:25.346 cpu : usr=1.20%, sys=5.40%, ctx=3617, majf=0, minf=11 01:12:25.346 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:12:25.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:25.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:25.346 issued rwts: total=1562,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:25.346 latency : target=0, window=0, percentile=100.00%, depth=1 01:12:25.346 job3: (groupid=0, jobs=1): err= 0: pid=105191: Mon Dec 9 06:11:19 2024 01:12:25.346 read: IOPS=1608, BW=6434KiB/s (6588kB/s)(6440KiB/1001msec) 01:12:25.346 slat (nsec): min=8877, max=55625, avg=14487.82, stdev=4505.87 01:12:25.346 clat (usec): min=207, max=3598, avg=289.45, stdev=86.05 01:12:25.346 lat (usec): min=220, max=3619, avg=303.94, stdev=86.78 01:12:25.346 clat percentiles (usec): 01:12:25.346 | 1.00th=[ 251], 5.00th=[ 258], 10.00th=[ 265], 20.00th=[ 273], 01:12:25.346 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 01:12:25.346 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 310], 95.00th=[ 322], 01:12:25.346 | 99.00th=[ 388], 99.50th=[ 416], 99.90th=[ 627], 99.95th=[ 3589], 01:12:25.346 | 99.99th=[ 3589] 01:12:25.346 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 01:12:25.346 slat (usec): min=12, max=139, avg=22.57, stdev= 5.44 01:12:25.346 clat (usec): min=105, max=7594, avg=223.61, stdev=177.91 01:12:25.346 lat (usec): min=153, max=7616, avg=246.18, stdev=178.17 01:12:25.346 clat percentiles (usec): 01:12:25.346 | 1.00th=[ 157], 5.00th=[ 176], 10.00th=[ 196], 20.00th=[ 208], 01:12:25.346 | 30.00th=[ 212], 40.00th=[ 215], 50.00th=[ 217], 60.00th=[ 219], 01:12:25.346 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 243], 95.00th=[ 253], 01:12:25.346 | 99.00th=[ 281], 99.50th=[ 297], 99.90th=[ 2114], 99.95th=[ 2376], 01:12:25.346 | 99.99th=[ 7570] 01:12:25.346 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 01:12:25.346 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 01:12:25.346 lat (usec) : 250=52.73%, 500=47.05%, 750=0.05%, 1000=0.05% 01:12:25.346 lat (msec) : 4=0.08%, 10=0.03% 01:12:25.346 cpu : usr=1.50%, sys=5.40%, ctx=3661, majf=0, minf=11 01:12:25.346 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:12:25.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:25.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:25.346 
issued rwts: total=1610,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:25.346 latency : target=0, window=0, percentile=100.00%, depth=1 01:12:25.346 01:12:25.346 Run status group 0 (all jobs): 01:12:25.346 READ: bw=25.2MiB/s (26.4MB/s), 6242KiB/s-6869KiB/s (6392kB/s-7034kB/s), io=25.2MiB (26.4MB), run=1001-1001msec 01:12:25.346 WRITE: bw=32.0MiB/s (33.5MB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=32.0MiB (33.6MB), run=1001-1001msec 01:12:25.346 01:12:25.346 Disk stats (read/write): 01:12:25.346 nvme0n1: ios=1586/1731, merge=0/0, ticks=466/386, in_queue=852, util=88.68% 01:12:25.346 nvme0n2: ios=1578/1536, merge=0/0, ticks=478/354, in_queue=832, util=88.88% 01:12:25.346 nvme0n3: ios=1528/1536, merge=0/0, ticks=458/349, in_queue=807, util=89.08% 01:12:25.346 nvme0n4: ios=1536/1586, merge=0/0, ticks=433/371, in_queue=804, util=89.21% 01:12:25.346 06:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 01:12:25.346 [global] 01:12:25.346 thread=1 01:12:25.346 invalidate=1 01:12:25.346 rw=write 01:12:25.346 time_based=1 01:12:25.346 runtime=1 01:12:25.346 ioengine=libaio 01:12:25.346 direct=1 01:12:25.346 bs=4096 01:12:25.346 iodepth=128 01:12:25.346 norandommap=0 01:12:25.346 numjobs=1 01:12:25.346 01:12:25.346 verify_dump=1 01:12:25.346 verify_backlog=512 01:12:25.346 verify_state_save=0 01:12:25.346 do_verify=1 01:12:25.346 verify=crc32c-intel 01:12:25.346 [job0] 01:12:25.346 filename=/dev/nvme0n1 01:12:25.346 [job1] 01:12:25.346 filename=/dev/nvme0n2 01:12:25.346 [job2] 01:12:25.346 filename=/dev/nvme0n3 01:12:25.346 [job3] 01:12:25.346 filename=/dev/nvme0n4 01:12:25.346 Could not set queue depth (nvme0n1) 01:12:25.346 Could not set queue depth (nvme0n2) 01:12:25.346 Could not set queue depth (nvme0n3) 01:12:25.346 Could not set queue depth (nvme0n4) 01:12:25.346 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:12:25.346 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:12:25.346 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:12:25.346 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:12:25.346 fio-3.35 01:12:25.346 Starting 4 threads 01:12:26.723 01:12:26.723 job0: (groupid=0, jobs=1): err= 0: pid=105246: Mon Dec 9 06:11:20 2024 01:12:26.723 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 01:12:26.723 slat (usec): min=7, max=7863, avg=166.62, stdev=855.18 01:12:26.723 clat (usec): min=8915, max=34965, avg=21503.67, stdev=5809.70 01:12:26.723 lat (usec): min=9345, max=37661, avg=21670.29, stdev=5810.21 01:12:26.723 clat percentiles (usec): 01:12:26.723 | 1.00th=[ 9503], 5.00th=[11338], 10.00th=[11600], 20.00th=[16909], 01:12:26.723 | 30.00th=[18220], 40.00th=[21365], 50.00th=[23200], 60.00th=[23725], 01:12:26.723 | 70.00th=[24773], 80.00th=[26346], 90.00th=[28443], 95.00th=[29230], 01:12:26.723 | 99.00th=[31327], 99.50th=[32375], 99.90th=[34866], 99.95th=[34866], 01:12:26.723 | 99.99th=[34866] 01:12:26.723 write: IOPS=3160, BW=12.3MiB/s (12.9MB/s)(12.4MiB/1005msec); 0 zone resets 01:12:26.723 slat (usec): min=10, max=5116, avg=146.16, stdev=617.72 01:12:26.723 clat (usec): min=2058, max=41015, avg=19171.46, stdev=8127.91 01:12:26.723 lat (usec): min=5692, max=41046, avg=19317.62, stdev=8164.83 01:12:26.723 clat 
percentiles (usec): 01:12:26.723 | 1.00th=[ 9241], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[11600], 01:12:26.723 | 30.00th=[12387], 40.00th=[16712], 50.00th=[17957], 60.00th=[19792], 01:12:26.723 | 70.00th=[20579], 80.00th=[23987], 90.00th=[34341], 95.00th=[35914], 01:12:26.723 | 99.00th=[36439], 99.50th=[37487], 99.90th=[41157], 99.95th=[41157], 01:12:26.723 | 99.99th=[41157] 01:12:26.723 bw ( KiB/s): min= 8192, max=16384, per=18.46%, avg=12288.00, stdev=5792.62, samples=2 01:12:26.723 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 01:12:26.723 lat (msec) : 4=0.02%, 10=5.23%, 20=43.01%, 50=51.74% 01:12:26.723 cpu : usr=2.49%, sys=9.66%, ctx=295, majf=0, minf=1 01:12:26.723 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 01:12:26.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:26.723 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:12:26.723 issued rwts: total=3072,3176,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:26.723 latency : target=0, window=0, percentile=100.00%, depth=128 01:12:26.723 job1: (groupid=0, jobs=1): err= 0: pid=105247: Mon Dec 9 06:11:20 2024 01:12:26.723 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 01:12:26.723 slat (usec): min=6, max=9623, avg=92.47, stdev=455.63 01:12:26.723 clat (usec): min=8635, max=33194, avg=11776.72, stdev=3098.62 01:12:26.723 lat (usec): min=9046, max=33234, avg=11869.19, stdev=3109.17 01:12:26.723 clat percentiles (usec): 01:12:26.723 | 1.00th=[ 9110], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[10945], 01:12:26.723 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 01:12:26.723 | 70.00th=[11600], 80.00th=[11731], 90.00th=[12125], 95.00th=[12911], 01:12:26.723 | 99.00th=[32900], 99.50th=[33162], 99.90th=[33162], 99.95th=[33162], 01:12:26.723 | 99.99th=[33162] 01:12:26.723 write: IOPS=5337, BW=20.9MiB/s (21.9MB/s)(20.9MiB/1003msec); 0 zone resets 01:12:26.723 slat (usec): min=10, max=8351, avg=91.35, stdev=410.12 01:12:26.723 clat (usec): min=315, max=33681, avg=12271.09, stdev=4742.42 01:12:26.723 lat (usec): min=2947, max=33736, avg=12362.44, stdev=4760.04 01:12:26.723 clat percentiles (usec): 01:12:26.723 | 1.00th=[ 6456], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[ 9896], 01:12:26.723 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 01:12:26.723 | 70.00th=[11600], 80.00th=[11994], 90.00th=[13173], 95.00th=[26870], 01:12:26.723 | 99.00th=[32900], 99.50th=[32900], 99.90th=[33424], 99.95th=[33817], 01:12:26.723 | 99.99th=[33817] 01:12:26.723 bw ( KiB/s): min=20480, max=21328, per=31.41%, avg=20904.00, stdev=599.63, samples=2 01:12:26.723 iops : min= 5120, max= 5332, avg=5226.00, stdev=149.91, samples=2 01:12:26.723 lat (usec) : 500=0.01% 01:12:26.723 lat (msec) : 4=0.37%, 10=13.87%, 20=80.25%, 50=5.50% 01:12:26.723 cpu : usr=4.39%, sys=13.57%, ctx=658, majf=0, minf=5 01:12:26.723 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 01:12:26.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:26.723 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:12:26.723 issued rwts: total=5120,5354,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:26.723 latency : target=0, window=0, percentile=100.00%, depth=128 01:12:26.723 job2: (groupid=0, jobs=1): err= 0: pid=105248: Mon Dec 9 06:11:20 2024 01:12:26.723 read: IOPS=4759, BW=18.6MiB/s (19.5MB/s)(18.7MiB/1004msec) 01:12:26.723 slat (usec): min=3, max=6966, avg=99.93, stdev=554.22 
01:12:26.723 clat (usec): min=1550, max=21935, avg=12849.28, stdev=1907.75 01:12:26.723 lat (usec): min=4934, max=21948, avg=12949.21, stdev=1936.39 01:12:26.723 clat percentiles (usec): 01:12:26.723 | 1.00th=[ 7701], 5.00th=[10159], 10.00th=[10683], 20.00th=[11338], 01:12:26.723 | 30.00th=[11994], 40.00th=[12649], 50.00th=[12911], 60.00th=[13173], 01:12:26.723 | 70.00th=[13698], 80.00th=[14222], 90.00th=[15270], 95.00th=[15926], 01:12:26.723 | 99.00th=[17433], 99.50th=[17695], 99.90th=[18220], 99.95th=[18482], 01:12:26.723 | 99.99th=[21890] 01:12:26.723 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 01:12:26.723 slat (usec): min=10, max=5386, avg=95.01, stdev=432.21 01:12:26.723 clat (usec): min=6736, max=18517, avg=12793.91, stdev=1423.15 01:12:26.723 lat (usec): min=6760, max=18536, avg=12888.92, stdev=1475.51 01:12:26.723 clat percentiles (usec): 01:12:26.723 | 1.00th=[ 8848], 5.00th=[10814], 10.00th=[11731], 20.00th=[12125], 01:12:26.723 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12649], 60.00th=[12911], 01:12:26.723 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13960], 95.00th=[15664], 01:12:26.723 | 99.00th=[17695], 99.50th=[18220], 99.90th=[18482], 99.95th=[18482], 01:12:26.723 | 99.99th=[18482] 01:12:26.723 bw ( KiB/s): min=20480, max=20480, per=30.77%, avg=20480.00, stdev= 0.00, samples=2 01:12:26.723 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 01:12:26.723 lat (msec) : 2=0.01%, 10=3.55%, 20=96.43%, 50=0.01% 01:12:26.723 cpu : usr=3.69%, sys=14.16%, ctx=572, majf=0, minf=1 01:12:26.723 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 01:12:26.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:26.723 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:12:26.723 issued rwts: total=4779,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:26.723 latency : target=0, window=0, percentile=100.00%, depth=128 01:12:26.723 job3: (groupid=0, jobs=1): err= 0: pid=105249: Mon Dec 9 06:11:20 2024 01:12:26.723 read: IOPS=2611, BW=10.2MiB/s (10.7MB/s)(10.3MiB/1005msec) 01:12:26.723 slat (usec): min=2, max=7517, avg=175.89, stdev=854.85 01:12:26.723 clat (usec): min=1267, max=73111, avg=22213.41, stdev=4792.74 01:12:26.723 lat (usec): min=5556, max=73118, avg=22389.30, stdev=4755.54 01:12:26.723 clat percentiles (usec): 01:12:26.723 | 1.00th=[ 5997], 5.00th=[16909], 10.00th=[17695], 20.00th=[18220], 01:12:26.723 | 30.00th=[20055], 40.00th=[21103], 50.00th=[23200], 60.00th=[23725], 01:12:26.723 | 70.00th=[24249], 80.00th=[25297], 90.00th=[27132], 95.00th=[27919], 01:12:26.723 | 99.00th=[28443], 99.50th=[29492], 99.90th=[72877], 99.95th=[72877], 01:12:26.723 | 99.99th=[72877] 01:12:26.723 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 01:12:26.723 slat (usec): min=11, max=12650, avg=168.39, stdev=757.20 01:12:26.723 clat (usec): min=13060, max=37483, avg=22264.83, stdev=5640.05 01:12:26.723 lat (usec): min=13172, max=37511, avg=22433.22, stdev=5656.47 01:12:26.723 clat percentiles (usec): 01:12:26.723 | 1.00th=[14877], 5.00th=[16712], 10.00th=[17171], 20.00th=[17957], 01:12:26.723 | 30.00th=[19006], 40.00th=[19792], 50.00th=[20317], 60.00th=[21365], 01:12:26.723 | 70.00th=[22152], 80.00th=[25297], 90.00th=[33817], 95.00th=[34866], 01:12:26.723 | 99.00th=[35914], 99.50th=[36439], 99.90th=[37487], 99.95th=[37487], 01:12:26.723 | 99.99th=[37487] 01:12:26.723 bw ( KiB/s): min=11808, max=12264, per=18.08%, avg=12036.00, stdev=322.44, samples=2 
01:12:26.724 iops : min= 2952, max= 3066, avg=3009.00, stdev=80.61, samples=2 01:12:26.724 lat (msec) : 2=0.02%, 10=0.56%, 20=35.97%, 50=63.33%, 100=0.12% 01:12:26.724 cpu : usr=2.79%, sys=8.27%, ctx=253, majf=0, minf=8 01:12:26.724 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 01:12:26.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:26.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:12:26.724 issued rwts: total=2625,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:26.724 latency : target=0, window=0, percentile=100.00%, depth=128 01:12:26.724 01:12:26.724 Run status group 0 (all jobs): 01:12:26.724 READ: bw=60.6MiB/s (63.6MB/s), 10.2MiB/s-19.9MiB/s (10.7MB/s-20.9MB/s), io=60.9MiB (63.9MB), run=1003-1005msec 01:12:26.724 WRITE: bw=65.0MiB/s (68.2MB/s), 11.9MiB/s-20.9MiB/s (12.5MB/s-21.9MB/s), io=65.3MiB (68.5MB), run=1003-1005msec 01:12:26.724 01:12:26.724 Disk stats (read/write): 01:12:26.724 nvme0n1: ios=2609/3012, merge=0/0, ticks=12398/13016, in_queue=25414, util=88.06% 01:12:26.724 nvme0n2: ios=4353/4608, merge=0/0, ticks=12066/12732, in_queue=24798, util=89.07% 01:12:26.724 nvme0n3: ios=4096/4380, merge=0/0, ticks=25039/25251, in_queue=50290, util=89.37% 01:12:26.724 nvme0n4: ios=2304/2560, merge=0/0, ticks=13240/13931, in_queue=27171, util=89.42% 01:12:26.724 06:11:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 01:12:26.724 [global] 01:12:26.724 thread=1 01:12:26.724 invalidate=1 01:12:26.724 rw=randwrite 01:12:26.724 time_based=1 01:12:26.724 runtime=1 01:12:26.724 ioengine=libaio 01:12:26.724 direct=1 01:12:26.724 bs=4096 01:12:26.724 iodepth=128 01:12:26.724 norandommap=0 01:12:26.724 numjobs=1 01:12:26.724 01:12:26.724 verify_dump=1 01:12:26.724 verify_backlog=512 01:12:26.724 verify_state_save=0 01:12:26.724 do_verify=1 01:12:26.724 verify=crc32c-intel 01:12:26.724 [job0] 01:12:26.724 filename=/dev/nvme0n1 01:12:26.724 [job1] 01:12:26.724 filename=/dev/nvme0n2 01:12:26.724 [job2] 01:12:26.724 filename=/dev/nvme0n3 01:12:26.724 [job3] 01:12:26.724 filename=/dev/nvme0n4 01:12:26.724 Could not set queue depth (nvme0n1) 01:12:26.724 Could not set queue depth (nvme0n2) 01:12:26.724 Could not set queue depth (nvme0n3) 01:12:26.724 Could not set queue depth (nvme0n4) 01:12:26.724 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:12:26.724 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:12:26.724 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:12:26.724 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 01:12:26.724 fio-3.35 01:12:26.724 Starting 4 threads 01:12:28.100 01:12:28.100 job0: (groupid=0, jobs=1): err= 0: pid=105302: Mon Dec 9 06:11:22 2024 01:12:28.100 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 01:12:28.100 slat (usec): min=2, max=7876, avg=191.59, stdev=798.89 01:12:28.100 clat (usec): min=17929, max=31492, avg=24527.09, stdev=2199.41 01:12:28.100 lat (usec): min=19219, max=31527, avg=24718.68, stdev=2151.57 01:12:28.100 clat percentiles (usec): 01:12:28.100 | 1.00th=[19530], 5.00th=[20841], 10.00th=[21365], 20.00th=[22676], 01:12:28.100 | 30.00th=[23462], 40.00th=[23987], 50.00th=[24249], 
60.00th=[24773], 01:12:28.100 | 70.00th=[25560], 80.00th=[26608], 90.00th=[27657], 95.00th=[28181], 01:12:28.100 | 99.00th=[29492], 99.50th=[29754], 99.90th=[30278], 99.95th=[30540], 01:12:28.100 | 99.99th=[31589] 01:12:28.100 write: IOPS=2811, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1002msec); 0 zone resets 01:12:28.100 slat (usec): min=10, max=6007, avg=173.95, stdev=791.90 01:12:28.100 clat (usec): min=138, max=29518, avg=22501.47, stdev=3366.34 01:12:28.100 lat (usec): min=1685, max=29548, avg=22675.43, stdev=3299.62 01:12:28.100 clat percentiles (usec): 01:12:28.100 | 1.00th=[ 2311], 5.00th=[17695], 10.00th=[20317], 20.00th=[22152], 01:12:28.100 | 30.00th=[22414], 40.00th=[22938], 50.00th=[23200], 60.00th=[23200], 01:12:28.100 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[25560], 01:12:28.100 | 99.00th=[27395], 99.50th=[28705], 99.90th=[29230], 99.95th=[29492], 01:12:28.100 | 99.99th=[29492] 01:12:28.100 bw ( KiB/s): min= 9232, max=12288, per=16.30%, avg=10760.00, stdev=2160.92, samples=2 01:12:28.100 iops : min= 2308, max= 3072, avg=2690.00, stdev=540.23, samples=2 01:12:28.100 lat (usec) : 250=0.02% 01:12:28.100 lat (msec) : 2=0.26%, 4=0.33%, 10=0.60%, 20=4.39%, 50=94.40% 01:12:28.100 cpu : usr=2.00%, sys=7.79%, ctx=613, majf=0, minf=4 01:12:28.100 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 01:12:28.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:28.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:12:28.100 issued rwts: total=2560,2817,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:28.100 latency : target=0, window=0, percentile=100.00%, depth=128 01:12:28.100 job1: (groupid=0, jobs=1): err= 0: pid=105303: Mon Dec 9 06:11:22 2024 01:12:28.100 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 01:12:28.100 slat (usec): min=7, max=2731, avg=86.52, stdev=406.47 01:12:28.100 clat (usec): min=8506, max=13778, avg=11369.72, stdev=634.46 01:12:28.100 lat (usec): min=9051, max=15204, avg=11456.24, stdev=520.43 01:12:28.100 clat percentiles (usec): 01:12:28.100 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[10945], 20.00th=[11207], 01:12:28.100 | 30.00th=[11338], 40.00th=[11338], 50.00th=[11469], 60.00th=[11469], 01:12:28.100 | 70.00th=[11600], 80.00th=[11731], 90.00th=[11863], 95.00th=[12125], 01:12:28.100 | 99.00th=[12649], 99.50th=[12911], 99.90th=[13698], 99.95th=[13829], 01:12:28.100 | 99.99th=[13829] 01:12:28.100 write: IOPS=5792, BW=22.6MiB/s (23.7MB/s)(22.7MiB/1002msec); 0 zone resets 01:12:28.100 slat (usec): min=10, max=2732, avg=81.56, stdev=335.46 01:12:28.100 clat (usec): min=267, max=13434, avg=10784.55, stdev=1277.60 01:12:28.100 lat (usec): min=2814, max=13465, avg=10866.11, stdev=1269.94 01:12:28.100 clat percentiles (usec): 01:12:28.100 | 1.00th=[ 6390], 5.00th=[ 9241], 10.00th=[ 9372], 20.00th=[ 9634], 01:12:28.100 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[11076], 60.00th=[11338], 01:12:28.100 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12125], 95.00th=[12387], 01:12:28.100 | 99.00th=[12780], 99.50th=[13042], 99.90th=[13435], 99.95th=[13435], 01:12:28.100 | 99.99th=[13435] 01:12:28.100 bw ( KiB/s): min=20896, max=24512, per=34.40%, avg=22704.00, stdev=2556.90, samples=2 01:12:28.101 iops : min= 5224, max= 6128, avg=5676.00, stdev=639.22, samples=2 01:12:28.101 lat (usec) : 500=0.01% 01:12:28.101 lat (msec) : 4=0.32%, 10=19.13%, 20=80.54% 01:12:28.101 cpu : usr=4.20%, sys=14.19%, ctx=592, majf=0, minf=1 01:12:28.101 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 
32=0.3%, >=64=99.4% 01:12:28.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:28.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:12:28.101 issued rwts: total=5632,5804,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:28.101 latency : target=0, window=0, percentile=100.00%, depth=128 01:12:28.101 job2: (groupid=0, jobs=1): err= 0: pid=105304: Mon Dec 9 06:11:22 2024 01:12:28.101 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 01:12:28.101 slat (usec): min=3, max=7440, avg=183.23, stdev=796.09 01:12:28.101 clat (usec): min=16677, max=29195, avg=23567.67, stdev=2143.13 01:12:28.101 lat (usec): min=17414, max=29209, avg=23750.90, stdev=2107.38 01:12:28.101 clat percentiles (usec): 01:12:28.101 | 1.00th=[18220], 5.00th=[19792], 10.00th=[20579], 20.00th=[21365], 01:12:28.101 | 30.00th=[22676], 40.00th=[23200], 50.00th=[23987], 60.00th=[24249], 01:12:28.101 | 70.00th=[24773], 80.00th=[25297], 90.00th=[26084], 95.00th=[26870], 01:12:28.101 | 99.00th=[28443], 99.50th=[28443], 99.90th=[29230], 99.95th=[29230], 01:12:28.101 | 99.99th=[29230] 01:12:28.101 write: IOPS=2895, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1004msec); 0 zone resets 01:12:28.101 slat (usec): min=11, max=6439, avg=175.95, stdev=789.70 01:12:28.101 clat (usec): min=2163, max=29612, avg=22675.67, stdev=2633.75 01:12:28.101 lat (usec): min=6462, max=29631, avg=22851.62, stdev=2531.97 01:12:28.101 clat percentiles (usec): 01:12:28.101 | 1.00th=[ 7177], 5.00th=[18482], 10.00th=[20579], 20.00th=[22152], 01:12:28.101 | 30.00th=[22414], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 01:12:28.101 | 70.00th=[23725], 80.00th=[23725], 90.00th=[24773], 95.00th=[25822], 01:12:28.101 | 99.00th=[27657], 99.50th=[28443], 99.90th=[29492], 99.95th=[29492], 01:12:28.101 | 99.99th=[29492] 01:12:28.101 bw ( KiB/s): min= 9944, max=12312, per=16.86%, avg=11128.00, stdev=1674.43, samples=2 01:12:28.101 iops : min= 2486, max= 3078, avg=2782.00, stdev=418.61, samples=2 01:12:28.101 lat (msec) : 4=0.02%, 10=0.59%, 20=6.20%, 50=93.20% 01:12:28.101 cpu : usr=2.49%, sys=7.58%, ctx=554, majf=0, minf=7 01:12:28.101 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 01:12:28.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:28.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:12:28.101 issued rwts: total=2560,2907,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:28.101 latency : target=0, window=0, percentile=100.00%, depth=128 01:12:28.101 job3: (groupid=0, jobs=1): err= 0: pid=105306: Mon Dec 9 06:11:22 2024 01:12:28.101 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 01:12:28.101 slat (usec): min=9, max=4210, avg=102.81, stdev=484.74 01:12:28.101 clat (usec): min=9993, max=16407, avg=13413.84, stdev=889.80 01:12:28.101 lat (usec): min=10354, max=16420, avg=13516.65, stdev=773.51 01:12:28.101 clat percentiles (usec): 01:12:28.101 | 1.00th=[10683], 5.00th=[11207], 10.00th=[12387], 20.00th=[13042], 01:12:28.101 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13566], 60.00th=[13566], 01:12:28.101 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14091], 95.00th=[14615], 01:12:28.101 | 99.00th=[15664], 99.50th=[15795], 99.90th=[16319], 99.95th=[16450], 01:12:28.101 | 99.99th=[16450] 01:12:28.101 write: IOPS=5024, BW=19.6MiB/s (20.6MB/s)(19.7MiB/1003msec); 0 zone resets 01:12:28.101 slat (usec): min=12, max=4068, avg=97.11, stdev=410.70 01:12:28.101 clat (usec): min=2228, max=17301, avg=12869.66, stdev=1595.49 01:12:28.101 lat 
(usec): min=2922, max=17340, avg=12966.77, stdev=1595.82 01:12:28.101 clat percentiles (usec): 01:12:28.101 | 1.00th=[ 7963], 5.00th=[11076], 10.00th=[11207], 20.00th=[11600], 01:12:28.101 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12911], 60.00th=[13566], 01:12:28.101 | 70.00th=[13960], 80.00th=[14353], 90.00th=[14615], 95.00th=[14746], 01:12:28.101 | 99.00th=[15795], 99.50th=[17171], 99.90th=[17171], 99.95th=[17171], 01:12:28.101 | 99.99th=[17433] 01:12:28.101 bw ( KiB/s): min=18824, max=20521, per=29.80%, avg=19672.50, stdev=1199.96, samples=2 01:12:28.101 iops : min= 4706, max= 5130, avg=4918.00, stdev=299.81, samples=2 01:12:28.101 lat (msec) : 4=0.17%, 10=0.62%, 20=99.21% 01:12:28.101 cpu : usr=3.79%, sys=13.47%, ctx=545, majf=0, minf=1 01:12:28.101 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 01:12:28.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:28.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 01:12:28.101 issued rwts: total=4608,5040,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:28.101 latency : target=0, window=0, percentile=100.00%, depth=128 01:12:28.101 01:12:28.101 Run status group 0 (all jobs): 01:12:28.101 READ: bw=59.8MiB/s (62.7MB/s), 9.96MiB/s-22.0MiB/s (10.4MB/s-23.0MB/s), io=60.0MiB (62.9MB), run=1002-1004msec 01:12:28.101 WRITE: bw=64.5MiB/s (67.6MB/s), 11.0MiB/s-22.6MiB/s (11.5MB/s-23.7MB/s), io=64.7MiB (67.9MB), run=1002-1004msec 01:12:28.101 01:12:28.101 Disk stats (read/write): 01:12:28.101 nvme0n1: ios=2098/2560, merge=0/0, ticks=12040/12687, in_queue=24727, util=88.47% 01:12:28.101 nvme0n2: ios=4803/5120, merge=0/0, ticks=12449/12057, in_queue=24506, util=88.98% 01:12:28.101 nvme0n3: ios=2144/2560, merge=0/0, ticks=12052/12915, in_queue=24967, util=89.19% 01:12:28.101 nvme0n4: ios=4096/4218, merge=0/0, ticks=12702/12048, in_queue=24750, util=89.74% 01:12:28.101 06:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 01:12:28.101 06:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=105328 01:12:28.101 06:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 01:12:28.101 06:11:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 01:12:28.101 [global] 01:12:28.101 thread=1 01:12:28.101 invalidate=1 01:12:28.101 rw=read 01:12:28.101 time_based=1 01:12:28.101 runtime=10 01:12:28.101 ioengine=libaio 01:12:28.101 direct=1 01:12:28.101 bs=4096 01:12:28.101 iodepth=1 01:12:28.101 norandommap=1 01:12:28.101 numjobs=1 01:12:28.101 01:12:28.101 [job0] 01:12:28.101 filename=/dev/nvme0n1 01:12:28.101 [job1] 01:12:28.101 filename=/dev/nvme0n2 01:12:28.101 [job2] 01:12:28.101 filename=/dev/nvme0n3 01:12:28.101 [job3] 01:12:28.101 filename=/dev/nvme0n4 01:12:28.101 Could not set queue depth (nvme0n1) 01:12:28.101 Could not set queue depth (nvme0n2) 01:12:28.101 Could not set queue depth (nvme0n3) 01:12:28.101 Could not set queue depth (nvme0n4) 01:12:28.101 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:12:28.101 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:12:28.101 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:12:28.101 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 01:12:28.101 fio-3.35 01:12:28.101 Starting 4 threads 01:12:31.386 06:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 01:12:31.386 fio: pid=105375, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 01:12:31.386 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=53362688, buflen=4096 01:12:31.386 06:11:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 01:12:31.691 fio: pid=105374, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 01:12:31.691 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=30126080, buflen=4096 01:12:31.691 06:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:12:31.691 06:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 01:12:31.949 fio: pid=105372, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 01:12:31.949 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=62590976, buflen=4096 01:12:31.949 06:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:12:31.949 06:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 01:12:32.207 fio: pid=105373, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 01:12:32.207 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=61280256, buflen=4096 01:12:32.207 06:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:12:32.207 01:12:32.207 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=105372: Mon Dec 9 06:11:26 2024 01:12:32.207 read: IOPS=4164, BW=16.3MiB/s (17.1MB/s)(59.7MiB/3670msec) 01:12:32.207 slat (usec): min=8, max=14665, avg=19.34, stdev=184.54 01:12:32.207 clat (usec): min=157, max=7913, avg=219.46, stdev=98.53 01:12:32.207 lat (usec): min=171, max=14884, avg=238.80, stdev=209.81 01:12:32.207 clat percentiles (usec): 01:12:32.207 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 192], 01:12:32.207 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 200], 60.00th=[ 204], 01:12:32.207 | 70.00th=[ 210], 80.00th=[ 231], 90.00th=[ 277], 95.00th=[ 326], 01:12:32.207 | 99.00th=[ 404], 99.50th=[ 424], 99.90th=[ 758], 99.95th=[ 1500], 01:12:32.207 | 99.99th=[ 4293] 01:12:32.207 bw ( KiB/s): min=12819, max=18488, per=33.18%, avg=16839.29, stdev=2471.30, samples=7 01:12:32.207 iops : min= 3204, max= 4622, avg=4209.71, stdev=618.03, samples=7 01:12:32.207 lat (usec) : 250=86.51%, 500=13.27%, 750=0.10%, 1000=0.03% 01:12:32.207 lat (msec) : 2=0.05%, 4=0.01%, 10=0.02% 01:12:32.207 cpu : usr=1.31%, sys=5.45%, ctx=15309, majf=0, minf=1 01:12:32.207 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:12:32.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
01:12:32.207 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:32.207 issued rwts: total=15282,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:32.207 latency : target=0, window=0, percentile=100.00%, depth=1 01:12:32.207 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=105373: Mon Dec 9 06:11:26 2024 01:12:32.207 read: IOPS=3749, BW=14.6MiB/s (15.4MB/s)(58.4MiB/3990msec) 01:12:32.207 slat (usec): min=8, max=9528, avg=19.52, stdev=158.06 01:12:32.207 clat (usec): min=155, max=15000, avg=245.50, stdev=174.01 01:12:32.207 lat (usec): min=170, max=15015, avg=265.02, stdev=235.91 01:12:32.207 clat percentiles (usec): 01:12:32.207 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 182], 01:12:32.207 | 30.00th=[ 188], 40.00th=[ 202], 50.00th=[ 251], 60.00th=[ 265], 01:12:32.207 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 314], 95.00th=[ 343], 01:12:32.207 | 99.00th=[ 424], 99.50th=[ 469], 99.90th=[ 1401], 99.95th=[ 1860], 01:12:32.207 | 99.99th=[ 7832] 01:12:32.207 bw ( KiB/s): min=12008, max=16200, per=29.34%, avg=14890.00, stdev=1666.92, samples=7 01:12:32.207 iops : min= 3002, max= 4050, avg=3722.43, stdev=416.82, samples=7 01:12:32.207 lat (usec) : 250=49.57%, 500=50.01%, 750=0.24%, 1000=0.04% 01:12:32.207 lat (msec) : 2=0.10%, 4=0.01%, 10=0.03%, 20=0.01% 01:12:32.207 cpu : usr=0.75%, sys=5.39%, ctx=14996, majf=0, minf=2 01:12:32.207 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:12:32.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:32.207 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:32.207 issued rwts: total=14962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:32.207 latency : target=0, window=0, percentile=100.00%, depth=1 01:12:32.207 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=105374: Mon Dec 9 06:11:26 2024 01:12:32.207 read: IOPS=2228, BW=8912KiB/s (9126kB/s)(28.7MiB/3301msec) 01:12:32.207 slat (usec): min=10, max=9431, avg=26.81, stdev=135.39 01:12:32.207 clat (usec): min=141, max=3855, avg=419.33, stdev=99.82 01:12:32.207 lat (usec): min=181, max=9724, avg=446.14, stdev=167.79 01:12:32.207 clat percentiles (usec): 01:12:32.207 | 1.00th=[ 255], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 326], 01:12:32.207 | 30.00th=[ 396], 40.00th=[ 437], 50.00th=[ 449], 60.00th=[ 453], 01:12:32.207 | 70.00th=[ 461], 80.00th=[ 469], 90.00th=[ 478], 95.00th=[ 486], 01:12:32.207 | 99.00th=[ 529], 99.50th=[ 676], 99.90th=[ 1483], 99.95th=[ 2114], 01:12:32.207 | 99.99th=[ 3851] 01:12:32.207 bw ( KiB/s): min= 8248, max= 9576, per=17.20%, avg=8728.00, stdev=516.38, samples=6 01:12:32.207 iops : min= 2062, max= 2394, avg=2182.00, stdev=129.10, samples=6 01:12:32.207 lat (usec) : 250=0.83%, 500=97.35%, 750=1.50%, 1000=0.14% 01:12:32.207 lat (msec) : 2=0.12%, 4=0.05% 01:12:32.207 cpu : usr=1.06%, sys=4.61%, ctx=7360, majf=0, minf=1 01:12:32.207 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:12:32.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:32.207 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:32.207 issued rwts: total=7356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:32.207 latency : target=0, window=0, percentile=100.00%, depth=1 01:12:32.208 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=105375: Mon Dec 9 06:11:26 
2024 01:12:32.208 read: IOPS=4394, BW=17.2MiB/s (18.0MB/s)(50.9MiB/2965msec) 01:12:32.208 slat (nsec): min=13048, max=78416, avg=14989.86, stdev=2385.34 01:12:32.208 clat (usec): min=174, max=4281, avg=211.01, stdev=69.63 01:12:32.208 lat (usec): min=190, max=4296, avg=226.00, stdev=70.20 01:12:32.208 clat percentiles (usec): 01:12:32.208 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 192], 01:12:32.208 | 30.00th=[ 196], 40.00th=[ 198], 50.00th=[ 200], 60.00th=[ 202], 01:12:32.208 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 225], 95.00th=[ 273], 01:12:32.208 | 99.00th=[ 404], 99.50th=[ 420], 99.90th=[ 603], 99.95th=[ 1450], 01:12:32.208 | 99.99th=[ 3392] 01:12:32.208 bw ( KiB/s): min=17152, max=18592, per=35.84%, avg=18187.20, stdev=586.83, samples=5 01:12:32.208 iops : min= 4288, max= 4648, avg=4546.80, stdev=146.71, samples=5 01:12:32.208 lat (usec) : 250=93.30%, 500=6.53%, 750=0.10%, 1000=0.01% 01:12:32.208 lat (msec) : 2=0.02%, 4=0.03%, 10=0.01% 01:12:32.208 cpu : usr=1.11%, sys=5.67%, ctx=13029, majf=0, minf=2 01:12:32.208 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:12:32.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:32.208 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:12:32.208 issued rwts: total=13029,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:12:32.208 latency : target=0, window=0, percentile=100.00%, depth=1 01:12:32.208 01:12:32.208 Run status group 0 (all jobs): 01:12:32.208 READ: bw=49.6MiB/s (52.0MB/s), 8912KiB/s-17.2MiB/s (9126kB/s-18.0MB/s), io=198MiB (207MB), run=2965-3990msec 01:12:32.208 01:12:32.208 Disk stats (read/write): 01:12:32.208 nvme0n1: ios=15119/0, merge=0/0, ticks=3315/0, in_queue=3315, util=95.52% 01:12:32.208 nvme0n2: ios=14508/0, merge=0/0, ticks=3578/0, in_queue=3578, util=95.86% 01:12:32.208 nvme0n3: ios=6835/0, merge=0/0, ticks=2937/0, in_queue=2937, util=96.28% 01:12:32.208 nvme0n4: ios=12821/0, merge=0/0, ticks=2708/0, in_queue=2708, util=96.64% 01:12:32.208 06:11:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 01:12:32.465 06:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:12:32.465 06:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 01:12:32.723 06:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:12:32.723 06:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 01:12:33.289 06:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:12:33.289 06:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 01:12:33.548 06:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 01:12:33.548 06:11:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 01:12:33.808 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 01:12:33.808 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 105328 01:12:33.808 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 01:12:33.808 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:12:33.808 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:12:33.808 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:12:33.808 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 01:12:33.808 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:12:33.808 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:12:33.808 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:12:33.808 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:12:33.808 nvmf hotplug test: fio failed as expected 01:12:33.808 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 01:12:33.808 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 01:12:33.808 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 01:12:33.808 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:12:34.066 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 01:12:34.066 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 01:12:34.066 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 01:12:34.066 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 01:12:34.066 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 01:12:34.066 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 01:12:34.066 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 01:12:34.066 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:12:34.066 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 01:12:34.066 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 01:12:34.066 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:12:34.066 
rmmod nvme_tcp 01:12:34.066 rmmod nvme_fabrics 01:12:34.066 rmmod nvme_keyring 01:12:34.325 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:12:34.325 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 01:12:34.325 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 01:12:34.325 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 104847 ']' 01:12:34.325 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 104847 01:12:34.325 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 104847 ']' 01:12:34.325 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 104847 01:12:34.325 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 01:12:34.325 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:12:34.325 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104847 01:12:34.325 killing process with pid 104847 01:12:34.325 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:12:34.325 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:12:34.325 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104847' 01:12:34.325 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 104847 01:12:34.325 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 104847 01:12:34.325 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:12:34.325 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:12:34.325 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:12:34.325 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 01:12:34.325 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:12:34.325 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 01:12:34.325 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 01:12:34.325 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:12:34.325 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:12:34.325 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:12:34.325 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:12:34.325 06:11:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:12:34.325 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:12:34.325 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:12:34.325 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:12:34.584 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:12:34.584 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:12:34.584 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:12:34.584 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:12:34.585 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:12:34.585 06:11:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:12:34.585 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:12:34.585 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 01:12:34.585 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:12:34.585 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:12:34.585 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:12:34.585 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 01:12:34.585 01:12:34.585 real 0m20.205s 01:12:34.585 user 1m1.334s 01:12:34.585 sys 0m11.777s 01:12:34.585 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 01:12:34.585 ************************************ 01:12:34.585 END TEST nvmf_fio_target 01:12:34.585 ************************************ 01:12:34.585 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 01:12:34.585 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 01:12:34.585 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:12:34.585 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 01:12:34.585 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:12:34.585 ************************************ 01:12:34.585 START TEST nvmf_bdevio 01:12:34.585 ************************************ 01:12:34.585 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh 
--transport=tcp --interrupt-mode 01:12:34.845 * Looking for test storage... 01:12:34.845 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:12:34.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:12:34.845 --rc genhtml_branch_coverage=1 01:12:34.845 --rc genhtml_function_coverage=1 01:12:34.845 --rc genhtml_legend=1 01:12:34.845 --rc geninfo_all_blocks=1 01:12:34.845 --rc geninfo_unexecuted_blocks=1 01:12:34.845 01:12:34.845 ' 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:12:34.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:12:34.845 --rc genhtml_branch_coverage=1 01:12:34.845 --rc genhtml_function_coverage=1 01:12:34.845 --rc genhtml_legend=1 01:12:34.845 --rc geninfo_all_blocks=1 01:12:34.845 --rc geninfo_unexecuted_blocks=1 01:12:34.845 01:12:34.845 ' 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:12:34.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:12:34.845 --rc genhtml_branch_coverage=1 01:12:34.845 --rc genhtml_function_coverage=1 01:12:34.845 --rc genhtml_legend=1 01:12:34.845 --rc geninfo_all_blocks=1 01:12:34.845 --rc geninfo_unexecuted_blocks=1 01:12:34.845 01:12:34.845 ' 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:12:34.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:12:34.845 --rc genhtml_branch_coverage=1 01:12:34.845 --rc genhtml_function_coverage=1 01:12:34.845 --rc genhtml_legend=1 01:12:34.845 --rc geninfo_all_blocks=1 01:12:34.845 --rc geninfo_unexecuted_blocks=1 01:12:34.845 01:12:34.845 ' 01:12:34.845 06:11:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:12:34.845 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:12:34.846 06:11:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:12:34.846 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:12:34.846 Cannot find device "nvmf_init_br" 01:12:34.847 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # true 01:12:34.847 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:12:34.847 Cannot find device "nvmf_init_br2" 01:12:34.847 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # true 01:12:34.847 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:12:34.847 Cannot find device "nvmf_tgt_br" 01:12:34.847 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # true 01:12:34.847 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:12:34.847 Cannot find device "nvmf_tgt_br2" 01:12:34.847 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # true 01:12:34.847 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:12:34.847 Cannot find device "nvmf_init_br" 01:12:34.847 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # true 01:12:34.847 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:12:34.847 Cannot find device "nvmf_init_br2" 01:12:34.847 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # true 01:12:34.847 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:12:34.847 Cannot find device "nvmf_tgt_br" 01:12:34.847 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # true 01:12:34.847 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:12:34.847 Cannot find device "nvmf_tgt_br2" 01:12:34.847 06:11:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # true 01:12:34.847 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:12:34.847 Cannot find device "nvmf_br" 01:12:34.847 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # true 01:12:34.847 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:12:34.847 Cannot find device "nvmf_init_if" 01:12:34.847 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # true 01:12:34.847 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:12:34.847 Cannot find device "nvmf_init_if2" 01:12:34.847 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # true 01:12:34.847 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:12:34.847 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:12:34.847 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # true 01:12:34.847 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:12:35.106 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # true 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:12:35.106 06:11:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:12:35.106 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
01:12:35.106 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 01:12:35.106 01:12:35.106 --- 10.0.0.3 ping statistics --- 01:12:35.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:12:35.106 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:12:35.106 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:12:35.106 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 01:12:35.106 01:12:35.106 --- 10.0.0.4 ping statistics --- 01:12:35.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:12:35.106 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 01:12:35.106 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:12:35.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:12:35.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 01:12:35.106 01:12:35.106 --- 10.0.0.1 ping statistics --- 01:12:35.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:12:35.107 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 01:12:35.107 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:12:35.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:12:35.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 01:12:35.107 01:12:35.107 --- 10.0.0.2 ping statistics --- 01:12:35.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:12:35.107 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 01:12:35.107 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:12:35.107 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 01:12:35.107 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:12:35.107 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:12:35.107 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:12:35.107 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:12:35.107 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:12:35.107 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:12:35.107 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:12:35.107 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 01:12:35.107 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:12:35.107 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 01:12:35.107 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:12:35.107 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=105760 01:12:35.107 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 01:12:35.107 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 105760 01:12:35.107 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 105760 ']' 01:12:35.107 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:12:35.107 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 01:12:35.107 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:12:35.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:12:35.107 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 01:12:35.107 06:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:12:35.366 [2024-12-09 06:11:29.743461] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:12:35.366 [2024-12-09 06:11:29.744743] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:12:35.366 [2024-12-09 06:11:29.744823] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:12:35.366 [2024-12-09 06:11:29.896896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:12:35.366 [2024-12-09 06:11:29.936243] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:12:35.366 [2024-12-09 06:11:29.936312] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:12:35.366 [2024-12-09 06:11:29.936327] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:12:35.366 [2024-12-09 06:11:29.936337] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:12:35.366 [2024-12-09 06:11:29.936345] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:12:35.366 [2024-12-09 06:11:29.937291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 01:12:35.366 [2024-12-09 06:11:29.937445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 01:12:35.366 [2024-12-09 06:11:29.937903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 01:12:35.366 [2024-12-09 06:11:29.937908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:12:35.624 [2024-12-09 06:11:29.994499] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:12:35.624 [2024-12-09 06:11:29.995022] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 01:12:35.624 [2024-12-09 06:11:29.995073] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
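Note: the target above was launched with -m 0x78, so reactors come up only on the cores set in that mask: 0x78 is binary 1111000, i.e. cores 3 through 6, which matches the four "Reactor started on core 3/4/5/6" notices. A quick, purely illustrative way to decode such a mask in the shell (not part of the test):

mask=0x78; for i in {0..7}; do (( mask >> i & 1 )) && echo "core $i"; done   # prints core 3..6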
01:12:35.624 [2024-12-09 06:11:29.995232] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 01:12:35.624 [2024-12-09 06:11:29.995537] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:12:36.560 [2024-12-09 06:11:30.858769] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:12:36.560 Malloc0 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:12:36.560 [2024-12-09 06:11:30.926922] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:12:36.560 { 01:12:36.560 "params": { 01:12:36.560 "name": "Nvme$subsystem", 01:12:36.560 "trtype": "$TEST_TRANSPORT", 01:12:36.560 "traddr": "$NVMF_FIRST_TARGET_IP", 01:12:36.560 "adrfam": "ipv4", 01:12:36.560 "trsvcid": "$NVMF_PORT", 01:12:36.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:12:36.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:12:36.560 "hdgst": ${hdgst:-false}, 01:12:36.560 "ddgst": ${ddgst:-false} 01:12:36.560 }, 01:12:36.560 "method": "bdev_nvme_attach_controller" 01:12:36.560 } 01:12:36.560 EOF 01:12:36.560 )") 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 01:12:36.560 06:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:12:36.560 "params": { 01:12:36.560 "name": "Nvme1", 01:12:36.560 "trtype": "tcp", 01:12:36.560 "traddr": "10.0.0.3", 01:12:36.560 "adrfam": "ipv4", 01:12:36.560 "trsvcid": "4420", 01:12:36.560 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:12:36.560 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:12:36.560 "hdgst": false, 01:12:36.560 "ddgst": false 01:12:36.560 }, 01:12:36.560 "method": "bdev_nvme_attach_controller" 01:12:36.560 }' 01:12:36.560 [2024-12-09 06:11:30.990188] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
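Note: the JSON printed above is what gen_nvmf_target_json hands to bdevio on /dev/fd/62; it amounts to a single bdev_nvme_attach_controller against the in-namespace listener at 10.0.0.3:4420. For comparison, roughly the same attach could be issued by RPC against a running SPDK app (illustrative command, not something this test runs):

scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1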
01:12:36.560 [2024-12-09 06:11:30.990288] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105814 ] 01:12:36.819 [2024-12-09 06:11:31.146382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 01:12:36.819 [2024-12-09 06:11:31.187241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:12:36.819 [2024-12-09 06:11:31.187365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:12:36.819 [2024-12-09 06:11:31.187372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:12:36.819 I/O targets: 01:12:36.819 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 01:12:36.819 01:12:36.819 01:12:36.819 CUnit - A unit testing framework for C - Version 2.1-3 01:12:36.819 http://cunit.sourceforge.net/ 01:12:36.819 01:12:36.819 01:12:36.819 Suite: bdevio tests on: Nvme1n1 01:12:36.819 Test: blockdev write read block ...passed 01:12:37.078 Test: blockdev write zeroes read block ...passed 01:12:37.078 Test: blockdev write zeroes read no split ...passed 01:12:37.078 Test: blockdev write zeroes read split ...passed 01:12:37.078 Test: blockdev write zeroes read split partial ...passed 01:12:37.078 Test: blockdev reset ...[2024-12-09 06:11:31.438767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 01:12:37.078 [2024-12-09 06:11:31.438905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2263f50 (9): Bad file descriptor 01:12:37.078 [2024-12-09 06:11:31.442418] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
01:12:37.078 passed 01:12:37.078 Test: blockdev write read 8 blocks ...passed 01:12:37.078 Test: blockdev write read size > 128k ...passed 01:12:37.078 Test: blockdev write read invalid size ...passed 01:12:37.078 Test: blockdev write read offset + nbytes == size of blockdev ...passed 01:12:37.078 Test: blockdev write read offset + nbytes > size of blockdev ...passed 01:12:37.078 Test: blockdev write read max offset ...passed 01:12:37.078 Test: blockdev write read 2 blocks on overlapped address offset ...passed 01:12:37.078 Test: blockdev writev readv 8 blocks ...passed 01:12:37.078 Test: blockdev writev readv 30 x 1block ...passed 01:12:37.078 Test: blockdev writev readv block ...passed 01:12:37.078 Test: blockdev writev readv size > 128k ...passed 01:12:37.078 Test: blockdev writev readv size > 128k in two iovs ...passed 01:12:37.078 Test: blockdev comparev and writev ...[2024-12-09 06:11:31.618800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:12:37.078 [2024-12-09 06:11:31.618860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 01:12:37.078 [2024-12-09 06:11:31.618882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:12:37.078 [2024-12-09 06:11:31.618893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 01:12:37.078 [2024-12-09 06:11:31.619602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:12:37.078 [2024-12-09 06:11:31.619635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 01:12:37.078 [2024-12-09 06:11:31.619667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:12:37.078 [2024-12-09 06:11:31.619680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 01:12:37.078 [2024-12-09 06:11:31.620042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:12:37.078 [2024-12-09 06:11:31.620067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 01:12:37.078 [2024-12-09 06:11:31.620084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:12:37.078 [2024-12-09 06:11:31.620095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 01:12:37.078 [2024-12-09 06:11:31.620498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:12:37.078 [2024-12-09 06:11:31.620528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 01:12:37.078 [2024-12-09 06:11:31.620546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 01:12:37.078 [2024-12-09 06:11:31.620557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 01:12:37.078 passed 01:12:37.338 Test: blockdev nvme passthru rw ...passed 01:12:37.338 Test: blockdev nvme passthru vendor specific ...passed 01:12:37.338 Test: blockdev nvme admin passthru ...[2024-12-09 06:11:31.706992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:12:37.338 [2024-12-09 06:11:31.707119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 01:12:37.338 [2024-12-09 06:11:31.707269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:12:37.338 [2024-12-09 06:11:31.707287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 01:12:37.338 [2024-12-09 06:11:31.707408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:12:37.338 [2024-12-09 06:11:31.707425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 01:12:37.338 [2024-12-09 06:11:31.707549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 01:12:37.338 [2024-12-09 06:11:31.707566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 01:12:37.338 passed 01:12:37.338 Test: blockdev copy ...passed 01:12:37.338 01:12:37.338 Run Summary: Type Total Ran Passed Failed Inactive 01:12:37.338 suites 1 1 n/a 0 0 01:12:37.338 tests 23 23 23 0 0 01:12:37.338 asserts 152 152 152 0 n/a 01:12:37.338 01:12:37.338 Elapsed time = 0.862 seconds 01:12:37.338 06:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:12:37.338 06:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:37.338 06:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:12:37.338 06:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:12:37.338 06:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 01:12:37.338 06:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 01:12:37.338 06:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 01:12:37.338 06:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 01:12:37.597 06:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:12:37.597 06:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 01:12:37.597 06:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 01:12:37.597 06:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:12:37.597 rmmod nvme_tcp 01:12:37.597 rmmod nvme_fabrics 01:12:37.597 rmmod nvme_keyring 01:12:37.597 06:11:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
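Note: the teardown above follows the usual nvmftestfini pattern: drop the trap, flush I/O, unload the host-side nvme modules (hence the rmmod messages), then kill the target by its saved pid. A minimal sketch of that sequence, with an illustrative helper name rather than the actual nvmf/common.sh code:

cleanup_nvmf_target() {
    local pid=$1
    sync                                        # settle outstanding I/O first
    modprobe -v -r nvme-tcp     || true         # host-side modules may already be gone
    modprobe -v -r nvme-fabrics || true
    kill -0 "$pid" 2>/dev/null && kill "$pid"   # only signal a still-running target
}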
01:12:37.597 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 01:12:37.597 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 01:12:37.597 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 105760 ']' 01:12:37.597 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 105760 01:12:37.597 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 105760 ']' 01:12:37.597 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 105760 01:12:37.597 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 01:12:37.597 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:12:37.597 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105760 01:12:37.597 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 01:12:37.597 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 01:12:37.597 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105760' 01:12:37.597 killing process with pid 105760 01:12:37.597 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 105760 01:12:37.597 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 105760 01:12:37.856 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:12:37.856 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:12:37.856 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:12:37.856 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 01:12:37.856 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 01:12:37.856 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 01:12:37.856 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:12:37.856 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:12:37.856 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:12:37.856 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:12:37.856 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:12:37.856 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:12:37.856 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:12:37.856 06:11:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:12:37.856 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:12:37.856 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:12:37.856 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:12:37.856 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:12:37.856 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:12:37.856 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:12:37.856 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:12:37.856 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:12:38.115 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 01:12:38.115 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:12:38.115 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 01:12:38.115 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:12:38.115 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 01:12:38.115 01:12:38.115 real 0m3.375s 01:12:38.115 user 0m6.652s 01:12:38.115 sys 0m1.130s 01:12:38.115 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 01:12:38.115 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 01:12:38.115 ************************************ 01:12:38.115 END TEST nvmf_bdevio 01:12:38.115 ************************************ 01:12:38.115 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 01:12:38.115 01:12:38.115 real 3m32.273s 01:12:38.115 user 9m35.091s 01:12:38.115 sys 1m20.308s 01:12:38.115 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 01:12:38.115 06:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 01:12:38.115 ************************************ 01:12:38.115 END TEST nvmf_target_core_interrupt_mode 01:12:38.115 ************************************ 01:12:38.115 06:11:32 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 01:12:38.115 06:11:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 01:12:38.115 06:11:32 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 01:12:38.115 06:11:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:12:38.115 ************************************ 01:12:38.115 START TEST nvmf_interrupt 01:12:38.115 ************************************ 01:12:38.115 06:11:32 nvmf_tcp.nvmf_interrupt -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 01:12:38.115 * Looking for test storage... 01:12:38.115 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:12:38.115 06:11:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:12:38.115 06:11:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 01:12:38.115 06:11:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:12:38.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:12:38.375 --rc genhtml_branch_coverage=1 01:12:38.375 --rc genhtml_function_coverage=1 01:12:38.375 --rc genhtml_legend=1 01:12:38.375 --rc geninfo_all_blocks=1 01:12:38.375 --rc geninfo_unexecuted_blocks=1 01:12:38.375 01:12:38.375 ' 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:12:38.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:12:38.375 --rc genhtml_branch_coverage=1 01:12:38.375 --rc genhtml_function_coverage=1 01:12:38.375 --rc genhtml_legend=1 01:12:38.375 --rc geninfo_all_blocks=1 01:12:38.375 --rc geninfo_unexecuted_blocks=1 01:12:38.375 01:12:38.375 ' 01:12:38.375 06:11:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:12:38.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:12:38.375 --rc genhtml_branch_coverage=1 01:12:38.375 --rc genhtml_function_coverage=1 01:12:38.375 --rc genhtml_legend=1 01:12:38.375 --rc geninfo_all_blocks=1 01:12:38.376 --rc geninfo_unexecuted_blocks=1 01:12:38.376 01:12:38.376 ' 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:12:38.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:12:38.376 --rc genhtml_branch_coverage=1 01:12:38.376 --rc genhtml_function_coverage=1 01:12:38.376 --rc genhtml_legend=1 01:12:38.376 --rc geninfo_all_blocks=1 01:12:38.376 --rc geninfo_unexecuted_blocks=1 01:12:38.376 01:12:38.376 ' 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
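Note: the lt/cmp_versions trace above is a plain field-wise comparison: both version strings are split on '.', '-' and ':' and compared component by component, which is how the harness decides whether the installed lcov understands the branch/function coverage flags. A simplified sketch of that idea (numeric fields only; the real scripts/common.sh helper supports more operators):

version_lt() {
    local IFS='.-:' i
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                                     # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2"  # mirrors the "lt 1.15 2" check above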
01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 01:12:38.376 06:11:32 
nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@460 -- # nvmf_veth_init 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 
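Note: the NVME_HOSTNQN/NVME_HOSTID pair above comes straight from nvme gen-hostnqn: the UUID suffix of the generated NQN doubles as the host ID passed on every nvme connect. Roughly (illustrative lines, not the literal common.sh assignments):

hostnqn=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:4083adec-...
hostid=${hostnqn##*:}            # keep only the UUID after the last ':'
NVME_HOST=(--hostnqn="$hostnqn" --hostid="$hostid")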
01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:12:38.376 Cannot find device "nvmf_init_br" 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # true 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:12:38.376 Cannot find device "nvmf_init_br2" 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # true 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:12:38.376 Cannot find device "nvmf_tgt_br" 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # true 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:12:38.376 Cannot find device "nvmf_tgt_br2" 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # true 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:12:38.376 Cannot find device "nvmf_init_br" 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # true 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:12:38.376 Cannot find device "nvmf_init_br2" 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # true 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:12:38.376 Cannot find device "nvmf_tgt_br" 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # true 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:12:38.376 Cannot find device "nvmf_tgt_br2" 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # true 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:12:38.376 Cannot find device "nvmf_br" 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # true 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # ip 
link delete nvmf_init_if 01:12:38.376 Cannot find device "nvmf_init_if" 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # true 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:12:38.376 Cannot find device "nvmf_init_if2" 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # true 01:12:38.376 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:12:38.376 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:12:38.377 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # true 01:12:38.377 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:12:38.377 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:12:38.377 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # true 01:12:38.377 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:12:38.377 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:12:38.377 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:12:38.377 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:12:38.377 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:12:38.636 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:12:38.636 06:11:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
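Note: at this point the namespace, the two veth pairs and the bridge exist; the entries that follow enslave the *_br peer ends and add the iptables ACCEPT rules. Once that is done, the resulting topology can be inspected with standard iproute2 commands (illustrative, not part of the test):

ip -br addr show                                   # host side: nvmf_init_if 10.0.0.1/24, nvmf_init_if2 10.0.0.2/24
ip netns exec nvmf_tgt_ns_spdk ip -br addr show    # namespace side: nvmf_tgt_if 10.0.0.3/24, nvmf_tgt_if2 10.0.0.4/24
ip link show master nvmf_br                        # the four *_br peer ends enslaved to nvmf_br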
01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:12:38.636 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:12:38.636 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 01:12:38.636 01:12:38.636 --- 10.0.0.3 ping statistics --- 01:12:38.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:12:38.636 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:12:38.636 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:12:38.636 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 01:12:38.636 01:12:38.636 --- 10.0.0.4 ping statistics --- 01:12:38.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:12:38.636 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:12:38.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:12:38.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 01:12:38.636 01:12:38.636 --- 10.0.0.1 ping statistics --- 01:12:38.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:12:38.636 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:12:38.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:12:38.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 01:12:38.636 01:12:38.636 --- 10.0.0.2 ping statistics --- 01:12:38.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:12:38.636 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@461 -- # return 0 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:12:38.636 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:12:38.895 06:11:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 01:12:38.895 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:12:38.895 06:11:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 01:12:38.895 06:11:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:12:38.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:12:38.895 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=106059 01:12:38.895 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 106059 01:12:38.895 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 01:12:38.895 06:11:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 106059 ']' 01:12:38.895 06:11:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:12:38.895 06:11:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 01:12:38.895 06:11:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:12:38.895 06:11:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 01:12:38.895 06:11:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:12:38.895 [2024-12-09 06:11:33.306326] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 01:12:38.895 [2024-12-09 06:11:33.307604] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:12:38.895 [2024-12-09 06:11:33.307704] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:12:38.895 [2024-12-09 06:11:33.454779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:12:39.155 [2024-12-09 06:11:33.486192] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
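Note: after launching the target inside the namespace, nvmfappstart waits for the RPC socket before any rpc_cmd is issued. Conceptually that is a poll loop along these lines (a sketch; helper name and timeout are illustrative, not the waitforlisten implementation):

wait_for_rpc() {
    local sock=${1:-/var/tmp/spdk.sock} tries=100
    while (( tries-- > 0 )); do
        scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1                                     # target never came up
}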
01:12:39.155 [2024-12-09 06:11:33.486265] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:12:39.155 [2024-12-09 06:11:33.486276] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:12:39.155 [2024-12-09 06:11:33.486283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:12:39.155 [2024-12-09 06:11:33.486290] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:12:39.155 [2024-12-09 06:11:33.487085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:12:39.155 [2024-12-09 06:11:33.487098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:12:39.155 [2024-12-09 06:11:33.537187] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 01:12:39.155 [2024-12-09 06:11:33.537423] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 01:12:39.155 [2024-12-09 06:11:33.537503] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile bs=2048 count=5000 01:12:39.155 5000+0 records in 01:12:39.155 5000+0 records out 01:12:39.155 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0328609 s, 312 MB/s 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile AIO0 2048 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:12:39.155 AIO0 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:12:39.155 [2024-12-09 06:11:33.691998] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:12:39.155 [2024-12-09 06:11:33.720342] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 106059 0 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106059 0 idle 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106059 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106059 -w 256 01:12:39.155 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 01:12:39.414 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106059 root 20 0 64.2g 45056 32768 S 0.0 0.4 0:00.21 reactor_0' 01:12:39.414 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106059 root 20 0 64.2g 45056 32768 S 0.0 0.4 0:00.21 reactor_0 01:12:39.414 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:12:39.414 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:12:39.414 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 01:12:39.414 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 01:12:39.414 06:11:33 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 01:12:39.414 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 01:12:39.414 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 01:12:39.414 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 01:12:39.414 06:11:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 01:12:39.414 06:11:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 106059 1 01:12:39.414 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106059 1 idle 01:12:39.414 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106059 01:12:39.414 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 01:12:39.414 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 01:12:39.414 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 01:12:39.414 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:12:39.414 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 01:12:39.414 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 01:12:39.414 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:12:39.414 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 01:12:39.414 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:12:39.414 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106059 -w 256 01:12:39.414 06:11:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 01:12:39.673 06:11:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106070 root 20 0 64.2g 45056 32768 S 0.0 0.4 0:00.00 reactor_1' 01:12:39.673 06:11:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106070 root 20 0 64.2g 45056 32768 S 0.0 0.4 0:00.00 reactor_1 01:12:39.673 06:11:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:12:39.673 06:11:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:12:39.673 06:11:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 01:12:39.673 06:11:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 01:12:39.673 06:11:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 01:12:39.673 06:11:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 01:12:39.673 06:11:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 01:12:39.673 06:11:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 01:12:39.673 06:11:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 01:12:39.673 06:11:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=106115 01:12:39.673 06:11:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 01:12:39.673 06:11:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 01:12:39.673 06:11:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 01:12:39.673 
06:11:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 106059 0 01:12:39.673 06:11:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 106059 0 busy 01:12:39.673 06:11:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106059 01:12:39.673 06:11:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 01:12:39.673 06:11:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 01:12:39.673 06:11:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 01:12:39.673 06:11:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:12:39.673 06:11:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 01:12:39.673 06:11:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:12:39.673 06:11:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 01:12:39.673 06:11:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:12:39.673 06:11:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106059 -w 256 01:12:39.673 06:11:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 01:12:39.673 06:11:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106059 root 20 0 64.2g 45056 32768 S 0.0 0.4 0:00.21 reactor_0' 01:12:39.673 06:11:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:12:39.673 06:11:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106059 root 20 0 64.2g 45056 32768 S 0.0 0.4 0:00.21 reactor_0 01:12:39.673 06:11:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:12:39.932 06:11:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 01:12:39.932 06:11:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 01:12:39.932 06:11:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 01:12:39.932 06:11:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 01:12:39.932 06:11:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 01:12:40.868 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 01:12:40.868 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:12:40.868 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106059 -w 256 01:12:40.868 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 01:12:40.868 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106059 root 20 0 64.2g 46336 33152 R 99.9 0.4 0:01.61 reactor_0' 01:12:40.868 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106059 root 20 0 64.2g 46336 33152 R 99.9 0.4 0:01.61 reactor_0 01:12:40.868 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:12:40.868 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:12:40.868 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 01:12:40.868 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 01:12:40.868 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 01:12:40.868 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 01:12:40.868 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = 
\i\d\l\e ]] 01:12:40.868 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 01:12:40.868 06:11:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 01:12:40.868 06:11:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 01:12:40.868 06:11:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 106059 1 01:12:40.868 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 106059 1 busy 01:12:40.868 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106059 01:12:40.868 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 01:12:40.868 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 01:12:40.868 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 01:12:40.868 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:12:40.868 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 01:12:40.868 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:12:40.868 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 01:12:40.868 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:12:40.868 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106059 -w 256 01:12:40.868 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 01:12:41.126 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106070 root 20 0 64.2g 46336 33152 R 62.5 0.4 0:00.82 reactor_1' 01:12:41.126 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106070 root 20 0 64.2g 46336 33152 R 62.5 0.4 0:00.82 reactor_1 01:12:41.126 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:12:41.126 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:12:41.126 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=62.5 01:12:41.126 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=62 01:12:41.126 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 01:12:41.126 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 01:12:41.126 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 01:12:41.126 06:11:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 01:12:41.126 06:11:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 106115 01:12:51.094 Initializing NVMe Controllers 01:12:51.094 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 01:12:51.094 Controller IO queue size 256, less than required. 01:12:51.094 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 01:12:51.094 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 01:12:51.094 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 01:12:51.094 Initialization complete. Launching workers. 
01:12:51.094 ======================================================== 01:12:51.094 Latency(us) 01:12:51.094 Device Information : IOPS MiB/s Average min max 01:12:51.094 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 6966.30 27.21 36806.03 5956.79 73089.61 01:12:51.094 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 7031.00 27.46 36463.27 6304.50 89189.82 01:12:51.094 ======================================================== 01:12:51.094 Total : 13997.30 54.68 36633.86 5956.79 89189.82 01:12:51.094 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 106059 0 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106059 0 idle 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106059 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106059 -w 256 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106059 root 20 0 64.2g 46336 33152 S 6.7 0.4 0:13.57 reactor_0' 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106059 root 20 0 64.2g 46336 33152 S 6.7 0.4 0:13.57 reactor_0 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 106059 1 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106059 1 idle 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106059 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=1 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106059 -w 256 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106070 root 20 0 64.2g 46336 33152 S 0.0 0.4 0:06.67 reactor_1' 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106070 root 20 0 64.2g 46336 33152 S 0.0 0.4 0:06.67 reactor_1 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 01:12:51.094 06:11:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in 
{0..1} 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 106059 0 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106059 0 idle 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106059 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106059 -w 256 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106059 root 20 0 64.2g 48640 33152 S 0.0 0.4 0:13.62 reactor_0' 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106059 root 20 0 64.2g 48640 33152 S 0.0 0.4 0:13.62 reactor_0 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 106059 1 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106059 1 idle 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106059 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106059 -w 256 01:12:52.472 06:11:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 01:12:52.731 06:11:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106070 root 20 0 64.2g 48640 33152 S 0.0 0.4 0:06.69 reactor_1' 01:12:52.731 06:11:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 01:12:52.731 06:11:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106070 root 20 0 64.2g 48640 33152 S 0.0 0.4 0:06.69 reactor_1 01:12:52.731 06:11:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 01:12:52.731 06:11:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 01:12:52.731 06:11:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 01:12:52.731 06:11:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 01:12:52.731 06:11:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 01:12:52.731 06:11:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 01:12:52.731 06:11:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 01:12:52.731 06:11:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 01:12:52.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 01:12:52.731 06:11:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 01:12:52.731 06:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 01:12:52.731 06:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 01:12:52.731 06:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 01:12:52.731 06:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 01:12:52.732 06:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 01:12:52.732 06:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 01:12:52.732 06:11:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 01:12:52.732 06:11:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 01:12:52.732 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 01:12:52.732 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 01:12:52.991 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:12:52.991 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 01:12:52.991 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 01:12:52.991 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:12:52.991 rmmod nvme_tcp 01:12:52.991 rmmod nvme_fabrics 01:12:52.991 rmmod nvme_keyring 01:12:52.991 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:12:52.991 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 01:12:52.991 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 01:12:52.991 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 106059 ']' 01:12:52.991 06:11:47 nvmf_tcp.nvmf_interrupt 
-- nvmf/common.sh@518 -- # killprocess 106059 01:12:52.991 06:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 106059 ']' 01:12:52.991 06:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 106059 01:12:52.991 06:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 01:12:52.991 06:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:12:52.991 06:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106059 01:12:52.991 killing process with pid 106059 01:12:52.991 06:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:12:52.991 06:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:12:52.991 06:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106059' 01:12:52.991 06:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 106059 01:12:52.991 06:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 106059 01:12:53.250 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:12:53.250 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:12:53.250 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:12:53.250 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 01:12:53.250 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:12:53.250 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 01:12:53.250 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 01:12:53.250 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:12:53.250 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:12:53.250 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:12:53.250 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:12:53.250 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:12:53.250 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:12:53.250 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:12:53.250 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:12:53.250 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:12:53.250 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:12:53.250 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:12:53.250 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:12:53.250 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:12:53.529 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:12:53.529 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:12:53.529 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@246 -- # remove_spdk_ns 01:12:53.529 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 01:12:53.529 06:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 01:12:53.529 06:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:12:53.529 06:11:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@300 -- # return 0 01:12:53.529 01:12:53.529 real 0m15.341s 01:12:53.529 user 0m27.852s 01:12:53.529 sys 0m7.411s 01:12:53.529 06:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 01:12:53.529 ************************************ 01:12:53.529 END TEST nvmf_interrupt 01:12:53.529 ************************************ 01:12:53.529 06:11:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 01:12:53.529 ************************************ 01:12:53.529 END TEST nvmf_tcp 01:12:53.529 ************************************ 01:12:53.529 01:12:53.529 real 19m45.807s 01:12:53.529 user 52m10.884s 01:12:53.529 sys 4m51.824s 01:12:53.529 06:11:47 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 01:12:53.529 06:11:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:12:53.529 06:11:47 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 01:12:53.529 06:11:47 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 01:12:53.529 06:11:47 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:12:53.529 06:11:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:12:53.529 06:11:47 -- common/autotest_common.sh@10 -- # set +x 01:12:53.529 ************************************ 01:12:53.529 START TEST spdkcli_nvmf_tcp 01:12:53.529 ************************************ 01:12:53.529 06:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 01:12:53.529 * Looking for test storage... 
01:12:53.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 01:12:53.529 06:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:12:53.529 06:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 01:12:53.529 06:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:12:53.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:12:53.788 --rc genhtml_branch_coverage=1 01:12:53.788 --rc genhtml_function_coverage=1 01:12:53.788 --rc genhtml_legend=1 01:12:53.788 --rc geninfo_all_blocks=1 01:12:53.788 --rc geninfo_unexecuted_blocks=1 01:12:53.788 01:12:53.788 ' 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:12:53.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:12:53.788 --rc genhtml_branch_coverage=1 
01:12:53.788 --rc genhtml_function_coverage=1 01:12:53.788 --rc genhtml_legend=1 01:12:53.788 --rc geninfo_all_blocks=1 01:12:53.788 --rc geninfo_unexecuted_blocks=1 01:12:53.788 01:12:53.788 ' 01:12:53.788 06:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:12:53.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:12:53.788 --rc genhtml_branch_coverage=1 01:12:53.788 --rc genhtml_function_coverage=1 01:12:53.788 --rc genhtml_legend=1 01:12:53.788 --rc geninfo_all_blocks=1 01:12:53.789 --rc geninfo_unexecuted_blocks=1 01:12:53.789 01:12:53.789 ' 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:12:53.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:12:53.789 --rc genhtml_branch_coverage=1 01:12:53.789 --rc genhtml_function_coverage=1 01:12:53.789 --rc genhtml_legend=1 01:12:53.789 --rc geninfo_all_blocks=1 01:12:53.789 --rc geninfo_unexecuted_blocks=1 01:12:53.789 01:12:53.789 ' 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:12:53.789 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 01:12:53.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=106451 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 106451 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 106451 ']' 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 01:12:53.789 06:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:12:53.789 [2024-12-09 06:11:48.292069] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:12:53.789 [2024-12-09 06:11:48.292446] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106451 ] 01:12:54.093 [2024-12-09 06:11:48.446217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 01:12:54.093 [2024-12-09 06:11:48.488599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:12:54.093 [2024-12-09 06:11:48.488616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:12:54.093 06:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:12:54.093 06:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 01:12:54.093 06:11:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 01:12:54.093 06:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 01:12:54.093 06:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:12:54.093 06:11:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 01:12:54.093 06:11:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 01:12:54.093 06:11:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 01:12:54.093 06:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 01:12:54.093 06:11:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:12:54.093 06:11:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 01:12:54.093 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 01:12:54.093 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 01:12:54.093 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 01:12:54.093 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 01:12:54.093 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 01:12:54.093 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 01:12:54.093 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 01:12:54.093 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 01:12:54.093 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 01:12:54.093 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 01:12:54.093 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 01:12:54.093 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 01:12:54.093 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 01:12:54.093 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 01:12:54.093 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 01:12:54.093 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 01:12:54.093 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 01:12:54.093 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 01:12:54.093 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 01:12:54.093 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 01:12:54.093 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 01:12:54.093 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 01:12:54.093 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 01:12:54.093 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 01:12:54.093 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 01:12:54.093 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 01:12:54.093 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 01:12:54.093 ' 01:12:57.376 [2024-12-09 06:11:51.426008] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:12:58.312 [2024-12-09 06:11:52.748150] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 01:13:00.848 [2024-12-09 06:11:55.181929] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 01:13:02.747 [2024-12-09 06:11:57.303543] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 01:13:04.649 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 01:13:04.649 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 01:13:04.649 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 01:13:04.649 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 
01:13:04.649 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 01:13:04.649 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 01:13:04.649 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 01:13:04.649 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 01:13:04.649 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 01:13:04.649 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 01:13:04.649 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 01:13:04.649 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 01:13:04.649 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 01:13:04.649 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 01:13:04.649 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 01:13:04.649 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 01:13:04.649 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 01:13:04.649 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 01:13:04.649 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 01:13:04.649 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 01:13:04.649 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 01:13:04.650 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 01:13:04.650 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 01:13:04.650 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 01:13:04.650 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 01:13:04.650 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 01:13:04.650 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 01:13:04.650 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 01:13:04.650 06:11:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 01:13:04.650 06:11:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 01:13:04.650 06:11:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 01:13:04.650 06:11:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 01:13:04.650 06:11:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 01:13:04.650 06:11:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:13:04.650 06:11:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 01:13:04.650 06:11:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 01:13:05.217 06:11:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 01:13:05.217 06:11:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 01:13:05.217 06:11:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 01:13:05.217 06:11:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 01:13:05.217 06:11:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:13:05.217 06:11:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 01:13:05.217 06:11:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 01:13:05.217 06:11:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:13:05.217 06:11:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 01:13:05.217 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 01:13:05.217 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 01:13:05.218 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 01:13:05.218 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 01:13:05.218 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 01:13:05.218 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 01:13:05.218 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 01:13:05.218 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 01:13:05.218 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 01:13:05.218 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 01:13:05.218 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 01:13:05.218 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 01:13:05.218 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 01:13:05.218 ' 01:13:11.824 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 01:13:11.824 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 01:13:11.824 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 01:13:11.824 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 01:13:11.824 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 01:13:11.824 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 01:13:11.824 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 01:13:11.824 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 01:13:11.824 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 01:13:11.824 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 01:13:11.824 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 01:13:11.824 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 01:13:11.824 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 01:13:11.824 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 01:13:11.824 06:12:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 01:13:11.824 06:12:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 01:13:11.824 06:12:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:13:11.824 06:12:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 106451 01:13:11.824 06:12:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 106451 ']' 01:13:11.824 06:12:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 106451 01:13:11.824 06:12:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 01:13:11.824 06:12:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:13:11.824 06:12:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106451 01:13:11.824 06:12:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:13:11.824 06:12:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:13:11.824 06:12:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106451' 01:13:11.824 killing process with pid 106451 01:13:11.824 06:12:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 106451 01:13:11.824 06:12:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 106451 01:13:11.824 06:12:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 01:13:11.824 06:12:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 01:13:11.824 Process with pid 106451 is not found 01:13:11.824 06:12:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 106451 ']' 01:13:11.824 06:12:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 106451 01:13:11.824 06:12:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 106451 ']' 01:13:11.824 06:12:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 106451 01:13:11.824 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (106451) - No such process 01:13:11.824 06:12:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 106451 is not found' 01:13:11.824 06:12:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 01:13:11.824 06:12:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 01:13:11.824 06:12:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 01:13:11.824 ************************************ 01:13:11.824 END TEST spdkcli_nvmf_tcp 01:13:11.824 ************************************ 
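Note: the spdkcli_nvmf_tcp run above drives every configuration step through test/spdkcli/spdkcli_job.py, which executes [command, expected-match, should-match] triples against the running nvmf_tgt and then diffs the 'll /nvmf' tree against a .match file. The following is a minimal standalone sketch of the same create/inspect/tear-down flow using scripts/spdkcli.py directly; the repo path, bdev parameters and NQN are copied from the log, but the invocation style is illustrative only and is not the test's own driver.

SPDKCLI=/home/vagrant/spdk_repo/spdk/scripts/spdkcli.py
"$SPDKCLI" "/bdevs/malloc create 32 512 Malloc3"                 # 32 MB malloc bdev, 512-byte blocks (size, block_size)
"$SPDKCLI" "nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192"
"$SPDKCLI" "/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True"
"$SPDKCLI" "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1"
"$SPDKCLI" "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4"
"$SPDKCLI" ll /nvmf                                              # listing that the check_match step compares
"$SPDKCLI" "/nvmf/subsystem delete_all"
"$SPDKCLI" "/bdevs/malloc delete Malloc3"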
01:13:11.824 01:13:11.824 real 0m17.530s 01:13:11.824 user 0m38.321s 01:13:11.824 sys 0m0.824s 01:13:11.824 06:12:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 01:13:11.824 06:12:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 01:13:11.824 06:12:05 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 01:13:11.824 06:12:05 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:13:11.824 06:12:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:13:11.824 06:12:05 -- common/autotest_common.sh@10 -- # set +x 01:13:11.825 ************************************ 01:13:11.825 START TEST nvmf_identify_passthru 01:13:11.825 ************************************ 01:13:11.825 06:12:05 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 01:13:11.825 * Looking for test storage... 01:13:11.825 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:13:11.825 06:12:05 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:13:11.825 06:12:05 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 01:13:11.825 06:12:05 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:13:11.825 06:12:05 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 01:13:11.825 06:12:05 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:13:11.825 06:12:05 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:13:11.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:13:11.825 --rc genhtml_branch_coverage=1 01:13:11.825 --rc genhtml_function_coverage=1 01:13:11.825 --rc genhtml_legend=1 01:13:11.825 --rc geninfo_all_blocks=1 01:13:11.825 --rc geninfo_unexecuted_blocks=1 01:13:11.825 01:13:11.825 ' 01:13:11.825 06:12:05 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:13:11.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:13:11.825 --rc genhtml_branch_coverage=1 01:13:11.825 --rc genhtml_function_coverage=1 01:13:11.825 --rc genhtml_legend=1 01:13:11.825 --rc geninfo_all_blocks=1 01:13:11.825 --rc geninfo_unexecuted_blocks=1 01:13:11.825 01:13:11.825 ' 01:13:11.825 06:12:05 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:13:11.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:13:11.825 --rc genhtml_branch_coverage=1 01:13:11.825 --rc genhtml_function_coverage=1 01:13:11.825 --rc genhtml_legend=1 01:13:11.825 --rc geninfo_all_blocks=1 01:13:11.825 --rc geninfo_unexecuted_blocks=1 01:13:11.825 01:13:11.825 ' 01:13:11.825 06:12:05 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:13:11.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:13:11.825 --rc genhtml_branch_coverage=1 01:13:11.825 --rc genhtml_function_coverage=1 01:13:11.825 --rc genhtml_legend=1 01:13:11.825 --rc geninfo_all_blocks=1 01:13:11.825 --rc geninfo_unexecuted_blocks=1 01:13:11.825 01:13:11.825 ' 01:13:11.825 06:12:05 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:13:11.825 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 01:13:11.825 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:13:11.825 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:13:11.825 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:13:11.825 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:13:11.825 
06:12:05 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:13:11.825 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:13:11.825 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:13:11.825 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:13:11.825 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:13:11.825 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:13:11.825 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:13:11.825 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:13:11.825 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:13:11.825 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:13:11.825 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:13:11.825 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:13:11.825 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:13:11.825 06:12:05 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:13:11.825 06:12:05 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:11.825 06:12:05 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:11.825 06:12:05 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:11.825 06:12:05 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 01:13:11.825 06:12:05 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:11.825 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 01:13:11.825 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:13:11.825 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:13:11.825 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:13:11.825 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:13:11.825 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:13:11.825 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:13:11.825 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:13:11.825 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:13:11.825 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:13:11.825 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 01:13:11.826 06:12:05 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:13:11.826 06:12:05 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 01:13:11.826 06:12:05 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:13:11.826 06:12:05 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:13:11.826 06:12:05 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:13:11.826 06:12:05 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:11.826 06:12:05 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:11.826 06:12:05 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:11.826 06:12:05 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 01:13:11.826 06:12:05 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:11.826 06:12:05 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:13:11.826 06:12:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:13:11.826 06:12:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@460 -- # nvmf_veth_init 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@154 -- # 
NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:13:11.826 Cannot find device "nvmf_init_br" 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:13:11.826 Cannot find device "nvmf_init_br2" 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:13:11.826 Cannot find device "nvmf_tgt_br" 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@164 -- # true 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:13:11.826 Cannot find device "nvmf_tgt_br2" 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@165 -- # true 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:13:11.826 Cannot find device "nvmf_init_br" 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@166 -- # true 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:13:11.826 Cannot find device "nvmf_init_br2" 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@167 -- # true 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:13:11.826 Cannot find device "nvmf_tgt_br" 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@168 -- # true 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:13:11.826 Cannot find device "nvmf_tgt_br2" 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@169 -- # true 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:13:11.826 Cannot find device "nvmf_br" 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@170 -- # true 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:13:11.826 Cannot find device "nvmf_init_if" 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@171 -- # true 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:13:11.826 Cannot find device "nvmf_init_if2" 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@172 -- # true 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:13:11.826 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@173 -- # true 01:13:11.826 06:12:05 nvmf_identify_passthru -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:13:11.826 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@174 -- # true 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:13:11.826 06:12:05 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:13:11.826 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:13:11.826 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:13:11.826 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:13:11.826 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:13:11.826 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:13:11.826 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:13:11.826 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:13:11.826 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:13:11.826 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:13:11.826 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:13:11.826 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:13:11.826 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:13:11.826 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:13:11.827 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:13:11.827 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:13:11.827 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:13:11.827 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:13:11.827 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:13:11.827 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:13:11.827 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:13:11.827 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:13:11.827 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:13:11.827 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:13:11.827 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:13:11.827 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:13:11.827 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:13:11.827 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:13:11.827 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:13:11.827 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:13:11.827 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 01:13:11.827 01:13:11.827 --- 10.0.0.3 ping statistics --- 01:13:11.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:13:11.827 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 01:13:11.827 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:13:11.827 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:13:11.827 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 01:13:11.827 01:13:11.827 --- 10.0.0.4 ping statistics --- 01:13:11.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:13:11.827 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 01:13:11.827 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:13:11.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:13:11.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 01:13:11.827 01:13:11.827 --- 10.0.0.1 ping statistics --- 01:13:11.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:13:11.827 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 01:13:11.827 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:13:11.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:13:11.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 01:13:11.827 01:13:11.827 --- 10.0.0.2 ping statistics --- 01:13:11.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:13:11.827 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 01:13:11.827 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:13:11.827 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@461 -- # return 0 01:13:11.827 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 01:13:11.827 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:13:11.827 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:13:11.827 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:13:11.827 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:13:11.827 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:13:11.827 06:12:06 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:13:11.827 06:12:06 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 01:13:11.827 06:12:06 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 01:13:11.827 06:12:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:13:11.827 06:12:06 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 01:13:11.827 06:12:06 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 01:13:11.827 06:12:06 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 01:13:11.827 06:12:06 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 01:13:11.827 06:12:06 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 01:13:11.827 06:12:06 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 01:13:11.827 06:12:06 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 01:13:11.827 06:12:06 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 01:13:11.827 06:12:06 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 01:13:11.827 06:12:06 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 01:13:11.827 06:12:06 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 01:13:11.827 06:12:06 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 01:13:11.827 06:12:06 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 01:13:11.827 06:12:06 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 01:13:11.827 06:12:06 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 01:13:11.827 06:12:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 01:13:11.827 06:12:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 01:13:11.827 06:12:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 01:13:12.086 06:12:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
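Note: before the identify step, nvmf_veth_init (test/nvmf/common.sh) builds the virtual network that the pings above just validated: the initiator ends of the veth pairs stay on the host, the target ends are moved into the nvmf_tgt_ns_spdk namespace, everything is joined by the nvmf_br bridge, and TCP port 4420 is opened in iptables. Below is a condensed sketch of that topology plus the serial-number probe that follows, using only addresses, interface names and paths that appear in the log; one of the two veth pairs per side is omitted for brevity.

# condensed from nvmf_veth_init; the real helper also creates nvmf_init_if2/nvmf_tgt_if2
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator end stays on the host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # target end goes into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # tagged SPDK_NVMF by the helper
ping -c 1 10.0.0.3                                                   # host -> namespaced target

# serial number of the first local PCIe controller, saved for the passthru comparison
bdf=0000:00:10.0
serial=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
           -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')

The same value (12340 for the QEMU drive here) is read back later over NVMe/TCP from nqn.2016-06.io.spdk:cnode1, and the test fails if the two differ.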
01:13:12.086 06:12:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 01:13:12.086 06:12:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 01:13:12.086 06:12:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 01:13:12.345 06:12:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 01:13:12.345 06:12:06 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 01:13:12.345 06:12:06 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 01:13:12.345 06:12:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:13:12.345 06:12:06 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 01:13:12.345 06:12:06 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 01:13:12.346 06:12:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:13:12.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:13:12.346 06:12:06 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=106956 01:13:12.346 06:12:06 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 01:13:12.346 06:12:06 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 01:13:12.346 06:12:06 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 106956 01:13:12.346 06:12:06 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 106956 ']' 01:13:12.346 06:12:06 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:13:12.346 06:12:06 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 01:13:12.346 06:12:06 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:13:12.346 06:12:06 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 01:13:12.346 06:12:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:13:12.346 [2024-12-09 06:12:06.843174] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:13:12.346 [2024-12-09 06:12:06.843795] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:13:12.604 [2024-12-09 06:12:06.996252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:13:12.605 [2024-12-09 06:12:07.031844] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:13:12.605 [2024-12-09 06:12:07.032350] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:13:12.605 [2024-12-09 06:12:07.032446] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:13:12.605 [2024-12-09 06:12:07.032521] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
01:13:12.605 [2024-12-09 06:12:07.032599] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:13:12.605 [2024-12-09 06:12:07.033579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:13:12.605 [2024-12-09 06:12:07.034150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:13:12.605 [2024-12-09 06:12:07.034262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:13:12.605 [2024-12-09 06:12:07.034267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:13:13.541 06:12:07 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:13:13.541 06:12:07 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 01:13:13.541 06:12:07 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 01:13:13.541 06:12:07 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:13.541 06:12:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:13:13.541 06:12:07 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:13.541 06:12:07 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 01:13:13.541 06:12:07 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:13.541 06:12:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:13:13.541 [2024-12-09 06:12:07.934633] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 01:13:13.541 06:12:07 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:13.541 06:12:07 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:13:13.541 06:12:07 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:13.541 06:12:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:13:13.541 [2024-12-09 06:12:07.948122] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:13:13.541 06:12:07 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:13.541 06:12:07 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 01:13:13.541 06:12:07 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 01:13:13.541 06:12:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:13:13.541 06:12:07 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 01:13:13.541 06:12:07 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:13.541 06:12:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:13:13.541 Nvme0n1 01:13:13.541 06:12:08 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:13.541 06:12:08 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 01:13:13.541 06:12:08 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:13.541 06:12:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:13:13.541 06:12:08 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:13.541 06:12:08 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 01:13:13.541 06:12:08 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:13.541 06:12:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:13:13.541 06:12:08 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:13.541 06:12:08 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:13:13.541 06:12:08 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:13.541 06:12:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:13:13.541 [2024-12-09 06:12:08.092936] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:13:13.541 06:12:08 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:13.541 06:12:08 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 01:13:13.541 06:12:08 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:13.541 06:12:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:13:13.541 [ 01:13:13.541 { 01:13:13.541 "allow_any_host": true, 01:13:13.541 "hosts": [], 01:13:13.541 "listen_addresses": [], 01:13:13.541 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 01:13:13.541 "subtype": "Discovery" 01:13:13.541 }, 01:13:13.541 { 01:13:13.541 "allow_any_host": true, 01:13:13.541 "hosts": [], 01:13:13.541 "listen_addresses": [ 01:13:13.541 { 01:13:13.541 "adrfam": "IPv4", 01:13:13.541 "traddr": "10.0.0.3", 01:13:13.541 "trsvcid": "4420", 01:13:13.541 "trtype": "TCP" 01:13:13.541 } 01:13:13.541 ], 01:13:13.541 "max_cntlid": 65519, 01:13:13.541 "max_namespaces": 1, 01:13:13.541 "min_cntlid": 1, 01:13:13.541 "model_number": "SPDK bdev Controller", 01:13:13.541 "namespaces": [ 01:13:13.541 { 01:13:13.541 "bdev_name": "Nvme0n1", 01:13:13.541 "name": "Nvme0n1", 01:13:13.541 "nguid": "47447D0DAAF749C5B986A546D3DA789B", 01:13:13.541 "nsid": 1, 01:13:13.541 "uuid": "47447d0d-aaf7-49c5-b986-a546d3da789b" 01:13:13.541 } 01:13:13.541 ], 01:13:13.541 "nqn": "nqn.2016-06.io.spdk:cnode1", 01:13:13.541 "serial_number": "SPDK00000000000001", 01:13:13.541 "subtype": "NVMe" 01:13:13.541 } 01:13:13.541 ] 01:13:13.541 06:12:08 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:13.541 06:12:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 01:13:13.541 06:12:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 01:13:13.541 06:12:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 01:13:13.800 06:12:08 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 01:13:13.800 06:12:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 01:13:13.800 06:12:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 01:13:13.800 06:12:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 01:13:14.058 06:12:08 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 01:13:14.058 06:12:08 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 01:13:14.058 06:12:08 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 01:13:14.058 06:12:08 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:13:14.058 06:12:08 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:14.058 06:12:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:13:14.058 06:12:08 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:14.058 06:12:08 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 01:13:14.058 06:12:08 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 01:13:14.058 06:12:08 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 01:13:14.058 06:12:08 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 01:13:14.317 06:12:08 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:13:14.317 06:12:08 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 01:13:14.317 06:12:08 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 01:13:14.317 06:12:08 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:13:14.317 rmmod nvme_tcp 01:13:14.317 rmmod nvme_fabrics 01:13:14.317 rmmod nvme_keyring 01:13:14.317 06:12:08 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:13:14.317 06:12:08 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 01:13:14.317 06:12:08 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 01:13:14.317 06:12:08 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 106956 ']' 01:13:14.317 06:12:08 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 106956 01:13:14.317 06:12:08 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 106956 ']' 01:13:14.317 06:12:08 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 106956 01:13:14.317 06:12:08 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 01:13:14.317 06:12:08 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:13:14.317 06:12:08 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106956 01:13:14.317 killing process with pid 106956 01:13:14.317 06:12:08 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:13:14.317 06:12:08 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:13:14.317 06:12:08 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106956' 01:13:14.317 06:12:08 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 106956 01:13:14.317 06:12:08 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 106956 01:13:14.577 06:12:08 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 01:13:14.577 06:12:08 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:13:14.577 06:12:08 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:13:14.577 06:12:08 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 01:13:14.577 06:12:08 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 01:13:14.577 06:12:08 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:13:14.577 06:12:08 nvmf_identify_passthru -- nvmf/common.sh@791 -- # 
iptables-restore 01:13:14.577 06:12:08 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:13:14.577 06:12:08 nvmf_identify_passthru -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:13:14.577 06:12:08 nvmf_identify_passthru -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:13:14.577 06:12:08 nvmf_identify_passthru -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:13:14.577 06:12:08 nvmf_identify_passthru -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:13:14.577 06:12:08 nvmf_identify_passthru -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:13:14.577 06:12:08 nvmf_identify_passthru -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:13:14.577 06:12:08 nvmf_identify_passthru -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:13:14.577 06:12:08 nvmf_identify_passthru -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:13:14.577 06:12:08 nvmf_identify_passthru -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:13:14.577 06:12:08 nvmf_identify_passthru -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:13:14.578 06:12:09 nvmf_identify_passthru -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:13:14.578 06:12:09 nvmf_identify_passthru -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:13:14.578 06:12:09 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:13:14.578 06:12:09 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:13:14.578 06:12:09 nvmf_identify_passthru -- nvmf/common.sh@246 -- # remove_spdk_ns 01:13:14.578 06:12:09 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:13:14.578 06:12:09 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:13:14.578 06:12:09 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:13:14.578 06:12:09 nvmf_identify_passthru -- nvmf/common.sh@300 -- # return 0 01:13:14.578 ************************************ 01:13:14.578 END TEST nvmf_identify_passthru 01:13:14.578 ************************************ 01:13:14.578 01:13:14.578 real 0m3.560s 01:13:14.578 user 0m8.206s 01:13:14.578 sys 0m0.904s 01:13:14.578 06:12:09 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 01:13:14.578 06:12:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 01:13:14.837 06:12:09 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 01:13:14.837 06:12:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:13:14.837 06:12:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:13:14.837 06:12:09 -- common/autotest_common.sh@10 -- # set +x 01:13:14.837 ************************************ 01:13:14.837 START TEST nvmf_dif 01:13:14.837 ************************************ 01:13:14.837 06:12:09 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 01:13:14.837 * Looking for test storage... 
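Note: in the identify_passthru stage that just completed, all target configuration goes over JSON-RPC; the rpc_cmd helper in autotest_common.sh effectively forwards each call to scripts/rpc.py against the nvmf_tgt that was started inside the namespace with --wait-for-rpc. A minimal sketch of that sequence, with the repo path, PCIe address and NQN taken from the log and the default RPC socket assumed:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$RPC" nvmf_set_config --passthru-identify-ctrlr       # must be set before framework init
"$RPC" framework_start_init                            # needed because nvmf_tgt ran with --wait-for-rpc
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
"$RPC" nvmf_get_subsystems                             # should list Nvme0n1 under cnode1, as in the JSON above

With the passthru handler enabled, identify data served for cnode1 mirrors the underlying PCIe controller, which is why the serial (12340) and model (QEMU) strings match across both probes.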
01:13:14.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:13:14.837 06:12:09 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:13:14.837 06:12:09 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 01:13:14.837 06:12:09 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:13:14.837 06:12:09 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:13:14.837 06:12:09 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:13:14.837 06:12:09 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 01:13:14.837 06:12:09 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 01:13:14.837 06:12:09 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 01:13:14.837 06:12:09 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 01:13:14.837 06:12:09 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 01:13:14.837 06:12:09 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 01:13:14.837 06:12:09 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 01:13:14.837 06:12:09 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 01:13:14.837 06:12:09 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 01:13:14.837 06:12:09 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:13:14.837 06:12:09 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 01:13:14.837 06:12:09 nvmf_dif -- scripts/common.sh@345 -- # : 1 01:13:14.837 06:12:09 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 01:13:14.837 06:12:09 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:13:14.837 06:12:09 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 01:13:14.837 06:12:09 nvmf_dif -- scripts/common.sh@353 -- # local d=1 01:13:14.837 06:12:09 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:13:14.837 06:12:09 nvmf_dif -- scripts/common.sh@355 -- # echo 1 01:13:14.837 06:12:09 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 01:13:14.837 06:12:09 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 01:13:14.837 06:12:09 nvmf_dif -- scripts/common.sh@353 -- # local d=2 01:13:14.837 06:12:09 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:13:14.837 06:12:09 nvmf_dif -- scripts/common.sh@355 -- # echo 2 01:13:14.837 06:12:09 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 01:13:14.837 06:12:09 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:13:14.837 06:12:09 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:13:14.837 06:12:09 nvmf_dif -- scripts/common.sh@368 -- # return 0 01:13:14.837 06:12:09 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:13:14.837 06:12:09 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:13:14.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:13:14.837 --rc genhtml_branch_coverage=1 01:13:14.837 --rc genhtml_function_coverage=1 01:13:14.837 --rc genhtml_legend=1 01:13:14.837 --rc geninfo_all_blocks=1 01:13:14.837 --rc geninfo_unexecuted_blocks=1 01:13:14.837 01:13:14.837 ' 01:13:14.837 06:12:09 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:13:14.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:13:14.837 --rc genhtml_branch_coverage=1 01:13:14.837 --rc genhtml_function_coverage=1 01:13:14.837 --rc genhtml_legend=1 01:13:14.837 --rc geninfo_all_blocks=1 01:13:14.837 --rc geninfo_unexecuted_blocks=1 01:13:14.837 01:13:14.837 ' 01:13:14.837 06:12:09 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
01:13:14.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:13:14.837 --rc genhtml_branch_coverage=1 01:13:14.837 --rc genhtml_function_coverage=1 01:13:14.837 --rc genhtml_legend=1 01:13:14.837 --rc geninfo_all_blocks=1 01:13:14.837 --rc geninfo_unexecuted_blocks=1 01:13:14.837 01:13:14.837 ' 01:13:14.837 06:12:09 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:13:14.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:13:14.837 --rc genhtml_branch_coverage=1 01:13:14.837 --rc genhtml_function_coverage=1 01:13:14.837 --rc genhtml_legend=1 01:13:14.837 --rc geninfo_all_blocks=1 01:13:14.837 --rc geninfo_unexecuted_blocks=1 01:13:14.837 01:13:14.837 ' 01:13:14.837 06:12:09 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:13:14.838 06:12:09 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 01:13:14.838 06:12:09 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:13:14.838 06:12:09 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:13:14.838 06:12:09 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:13:14.838 06:12:09 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:14.838 06:12:09 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:14.838 06:12:09 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:14.838 06:12:09 nvmf_dif -- paths/export.sh@5 -- # export PATH 01:13:14.838 06:12:09 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@51 -- # : 0 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:13:14.838 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 01:13:14.838 06:12:09 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 01:13:14.838 06:12:09 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 01:13:14.838 06:12:09 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 01:13:14.838 06:12:09 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 01:13:14.838 06:12:09 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 01:13:14.838 06:12:09 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 01:13:15.097 06:12:09 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 01:13:15.097 06:12:09 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:13:15.098 06:12:09 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:13:15.098 06:12:09 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:13:15.098 06:12:09 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:13:15.098 Cannot find device "nvmf_init_br" 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@162 -- # true 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:13:15.098 Cannot find device "nvmf_init_br2" 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@163 -- # true 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:13:15.098 Cannot find device "nvmf_tgt_br" 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@164 -- # true 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:13:15.098 Cannot find device "nvmf_tgt_br2" 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@165 -- # true 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:13:15.098 Cannot find device "nvmf_init_br" 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@166 -- # true 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:13:15.098 Cannot find device "nvmf_init_br2" 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@167 -- # true 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:13:15.098 Cannot find device "nvmf_tgt_br" 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@168 -- # true 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:13:15.098 Cannot find device "nvmf_tgt_br2" 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@169 -- # true 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:13:15.098 Cannot find device "nvmf_br" 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@170 -- # true 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 01:13:15.098 Cannot find device "nvmf_init_if" 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@171 -- # true 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:13:15.098 Cannot find device "nvmf_init_if2" 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@172 -- # true 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:13:15.098 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@173 -- # true 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:13:15.098 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@174 -- # true 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:13:15.098 06:12:09 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:13:15.358 06:12:09 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:13:15.358 06:12:09 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:13:15.358 06:12:09 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:13:15.358 06:12:09 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:13:15.358 06:12:09 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:13:15.358 06:12:09 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:13:15.358 06:12:09 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:13:15.358 06:12:09 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:13:15.358 06:12:09 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:13:15.358 06:12:09 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:13:15.358 06:12:09 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:13:15.358 06:12:09 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:13:15.358 06:12:09 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:13:15.358 06:12:09 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:13:15.358 06:12:09 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:13:15.358 06:12:09 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:13:15.358 06:12:09 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:13:15.358 06:12:09 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:13:15.358 06:12:09 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:13:15.358 06:12:09 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:13:15.358 06:12:09 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:13:15.358 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:13:15.358 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 01:13:15.358 01:13:15.358 --- 10.0.0.3 ping statistics --- 01:13:15.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:13:15.358 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 01:13:15.358 06:12:09 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:13:15.358 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:13:15.358 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 01:13:15.358 01:13:15.358 --- 10.0.0.4 ping statistics --- 01:13:15.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:13:15.358 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 01:13:15.358 06:12:09 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:13:15.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:13:15.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 01:13:15.358 01:13:15.358 --- 10.0.0.1 ping statistics --- 01:13:15.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:13:15.358 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 01:13:15.358 06:12:09 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:13:15.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
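For orientation, the veth/bridge/iptables plumbing traced above boils down to the following; this is a minimal single-pair sketch (interface names, addresses and the 4420 listener port follow the log, the reduction to one veth pair per side is illustrative, not the full four-interface topology the test builds):
# target network namespace plus one host<->namespace veth path
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end is moved into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# one Linux bridge joins the host-side peers of both veth pairs
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# accept NVMe/TCP traffic on the listener port and allow bridge-internal forwarding
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# connectivity check, as in the pings in the trace
ping -c 1 10.0.0.3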
01:13:15.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 01:13:15.358 01:13:15.358 --- 10.0.0.2 ping statistics --- 01:13:15.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:13:15.358 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 01:13:15.358 06:12:09 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:13:15.358 06:12:09 nvmf_dif -- nvmf/common.sh@461 -- # return 0 01:13:15.358 06:12:09 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 01:13:15.358 06:12:09 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:13:15.617 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:13:15.617 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 01:13:15.617 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 01:13:15.617 06:12:10 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:13:15.617 06:12:10 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:13:15.617 06:12:10 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:13:15.617 06:12:10 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:13:15.617 06:12:10 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:13:15.617 06:12:10 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:13:15.877 06:12:10 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 01:13:15.877 06:12:10 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 01:13:15.877 06:12:10 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:13:15.877 06:12:10 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 01:13:15.877 06:12:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:13:15.877 06:12:10 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=107358 01:13:15.877 06:12:10 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 01:13:15.877 06:12:10 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 107358 01:13:15.877 06:12:10 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 107358 ']' 01:13:15.877 06:12:10 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:13:15.877 06:12:10 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 01:13:15.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:13:15.877 06:12:10 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:13:15.877 06:12:10 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 01:13:15.877 06:12:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:13:15.877 [2024-12-09 06:12:10.308871] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:13:15.877 [2024-12-09 06:12:10.308972] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:13:15.877 [2024-12-09 06:12:10.456185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:13:16.137 [2024-12-09 06:12:10.506131] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
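The target just launched above runs entirely inside that namespace. Stripped of the harness functions, nvmfappstart plus the transport setup that follows amount to roughly this (the nvmf_tgt and nvmf_create_transport lines are verbatim from the trace; the polling loop is only a simplified stand-in for waitforlisten):
# start the NVMe-oF target inside the namespace, single core, full tracepoint mask
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!   # the harness keeps this PID for later cleanup
# wait until the app answers on its default RPC socket (/var/tmp/spdk.sock)
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods > /dev/null 2>&1; do
    sleep 0.5
done
# the dif tests then enable DIF insert/strip on the TCP transport (traced further below)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip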
01:13:16.137 [2024-12-09 06:12:10.506217] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:13:16.137 [2024-12-09 06:12:10.506257] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:13:16.137 [2024-12-09 06:12:10.506271] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:13:16.137 [2024-12-09 06:12:10.506283] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:13:16.137 [2024-12-09 06:12:10.506726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:13:16.137 06:12:10 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:13:16.137 06:12:10 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 01:13:16.137 06:12:10 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:13:16.137 06:12:10 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 01:13:16.137 06:12:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:13:16.137 06:12:10 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:13:16.137 06:12:10 nvmf_dif -- target/dif.sh@139 -- # create_transport 01:13:16.137 06:12:10 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 01:13:16.137 06:12:10 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:16.137 06:12:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:13:16.137 [2024-12-09 06:12:10.657484] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:13:16.137 06:12:10 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:16.137 06:12:10 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 01:13:16.137 06:12:10 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:13:16.137 06:12:10 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 01:13:16.137 06:12:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:13:16.137 ************************************ 01:13:16.137 START TEST fio_dif_1_default 01:13:16.137 ************************************ 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:13:16.137 bdev_null0 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:16.137 06:12:10 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:13:16.137 [2024-12-09 06:12:10.705643] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:13:16.137 { 01:13:16.137 "params": { 01:13:16.137 "name": "Nvme$subsystem", 01:13:16.137 "trtype": "$TEST_TRANSPORT", 01:13:16.137 "traddr": "$NVMF_FIRST_TARGET_IP", 01:13:16.137 "adrfam": "ipv4", 01:13:16.137 "trsvcid": "$NVMF_PORT", 01:13:16.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:13:16.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:13:16.137 "hdgst": ${hdgst:-false}, 01:13:16.137 "ddgst": ${ddgst:-false} 01:13:16.137 }, 01:13:16.137 "method": "bdev_nvme_attach_controller" 01:13:16.137 } 01:13:16.137 EOF 01:13:16.137 )") 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:13:16.137 06:12:10 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 01:13:16.137 06:12:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 01:13:16.396 06:12:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 01:13:16.396 06:12:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:13:16.396 "params": { 01:13:16.396 "name": "Nvme0", 01:13:16.396 "trtype": "tcp", 01:13:16.396 "traddr": "10.0.0.3", 01:13:16.396 "adrfam": "ipv4", 01:13:16.396 "trsvcid": "4420", 01:13:16.396 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:13:16.396 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:13:16.396 "hdgst": false, 01:13:16.396 "ddgst": false 01:13:16.396 }, 01:13:16.396 "method": "bdev_nvme_attach_controller" 01:13:16.396 }' 01:13:16.396 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 01:13:16.396 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:13:16.396 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:13:16.396 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:13:16.396 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:13:16.396 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:13:16.396 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 01:13:16.396 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:13:16.396 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:13:16.396 06:12:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:13:16.396 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 01:13:16.396 fio-3.35 01:13:16.396 Starting 1 thread 01:13:28.627 01:13:28.627 filename0: (groupid=0, jobs=1): err= 0: pid=107429: Mon Dec 9 06:12:21 2024 01:13:28.627 read: IOPS=253, BW=1012KiB/s (1037kB/s)(9.92MiB/10036msec) 01:13:28.627 slat (nsec): min=6514, max=72546, avg=9519.83, stdev=4372.61 01:13:28.627 clat (usec): min=423, max=42507, avg=15774.96, stdev=19656.32 01:13:28.627 lat (usec): min=431, max=42518, avg=15784.48, stdev=19656.13 01:13:28.627 clat percentiles (usec): 01:13:28.627 | 1.00th=[ 445], 5.00th=[ 461], 10.00th=[ 474], 20.00th=[ 486], 01:13:28.627 | 30.00th=[ 494], 40.00th=[ 506], 50.00th=[ 523], 60.00th=[ 
562], 01:13:28.627 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 01:13:28.627 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 01:13:28.627 | 99.99th=[42730] 01:13:28.627 bw ( KiB/s): min= 768, max= 1792, per=100.00%, avg=1014.40, stdev=237.23, samples=20 01:13:28.627 iops : min= 192, max= 448, avg=253.60, stdev=59.31, samples=20 01:13:28.627 lat (usec) : 500=34.53%, 750=27.68% 01:13:28.627 lat (msec) : 4=0.16%, 50=37.64% 01:13:28.627 cpu : usr=91.75%, sys=7.79%, ctx=31, majf=0, minf=9 01:13:28.627 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:13:28.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:28.627 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:28.627 issued rwts: total=2540,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:28.627 latency : target=0, window=0, percentile=100.00%, depth=4 01:13:28.627 01:13:28.627 Run status group 0 (all jobs): 01:13:28.627 READ: bw=1012KiB/s (1037kB/s), 1012KiB/s-1012KiB/s (1037kB/s-1037kB/s), io=9.92MiB (10.4MB), run=10036-10036msec 01:13:28.627 06:12:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 01:13:28.627 06:12:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 01:13:28.627 06:12:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 01:13:28.627 06:12:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 01:13:28.627 06:12:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 01:13:28.627 06:12:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:13:28.627 06:12:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:28.627 06:12:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:13:28.627 06:12:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:28.627 06:12:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:13:28.627 06:12:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:28.627 06:12:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:13:28.627 ************************************ 01:13:28.627 END TEST fio_dif_1_default 01:13:28.627 ************************************ 01:13:28.627 06:12:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:28.627 01:13:28.627 real 0m11.001s 01:13:28.627 user 0m9.873s 01:13:28.627 sys 0m1.009s 01:13:28.627 06:12:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 01:13:28.627 06:12:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:13:28.627 06:12:21 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 01:13:28.627 06:12:21 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:13:28.628 06:12:21 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 01:13:28.628 06:12:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:13:28.628 ************************************ 01:13:28.628 START TEST fio_dif_1_multi_subsystems 01:13:28.628 ************************************ 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 
-- # local files=1 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:13:28.628 bdev_null0 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:13:28.628 [2024-12-09 06:12:21.760973] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:13:28.628 bdev_null1 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:13:28.628 { 01:13:28.628 "params": { 01:13:28.628 "name": "Nvme$subsystem", 01:13:28.628 "trtype": "$TEST_TRANSPORT", 01:13:28.628 "traddr": "$NVMF_FIRST_TARGET_IP", 01:13:28.628 "adrfam": "ipv4", 01:13:28.628 "trsvcid": "$NVMF_PORT", 01:13:28.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:13:28.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:13:28.628 "hdgst": ${hdgst:-false}, 01:13:28.628 "ddgst": ${ddgst:-false} 01:13:28.628 }, 01:13:28.628 "method": "bdev_nvme_attach_controller" 01:13:28.628 } 01:13:28.628 EOF 01:13:28.628 )") 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:13:28.628 { 01:13:28.628 "params": { 01:13:28.628 "name": "Nvme$subsystem", 01:13:28.628 "trtype": "$TEST_TRANSPORT", 01:13:28.628 "traddr": "$NVMF_FIRST_TARGET_IP", 01:13:28.628 "adrfam": "ipv4", 01:13:28.628 "trsvcid": "$NVMF_PORT", 01:13:28.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:13:28.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:13:28.628 "hdgst": ${hdgst:-false}, 01:13:28.628 "ddgst": ${ddgst:-false} 01:13:28.628 }, 01:13:28.628 "method": "bdev_nvme_attach_controller" 01:13:28.628 } 01:13:28.628 EOF 01:13:28.628 )") 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
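The two bdev_nvme_attach_controller blocks assembled above (and printed, merged, just below) are the JSON that fio receives on /dev/fd/62 through the SPDK bdev ioengine. Outside the harness the same plumbing can be sketched as follows; the JSON shape and the plugin options mirror the log, while the Nvme0n1/Nvme1n1 filenames and the job parameters are assumptions read off the fio header and summary lines rather than the generated job file:
# roughly what gen_nvmf_target_json 0 1 emits for the two subsystems
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        },
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.3",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# run fio through the SPDK fio bdev plugin, as the fio_bdev wrapper does via LD_PRELOAD
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
/usr/src/fio/fio --thread=1 --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json \
    --rw=randread --bs=4k --iodepth=4 --runtime=10 --time_based \
    --name=filename0 --filename=Nvme0n1 \
    --name=filename1 --filename=Nvme1n1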
01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 01:13:28.628 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:13:28.628 "params": { 01:13:28.628 "name": "Nvme0", 01:13:28.628 "trtype": "tcp", 01:13:28.628 "traddr": "10.0.0.3", 01:13:28.628 "adrfam": "ipv4", 01:13:28.628 "trsvcid": "4420", 01:13:28.628 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:13:28.628 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:13:28.629 "hdgst": false, 01:13:28.629 "ddgst": false 01:13:28.629 }, 01:13:28.629 "method": "bdev_nvme_attach_controller" 01:13:28.629 },{ 01:13:28.629 "params": { 01:13:28.629 "name": "Nvme1", 01:13:28.629 "trtype": "tcp", 01:13:28.629 "traddr": "10.0.0.3", 01:13:28.629 "adrfam": "ipv4", 01:13:28.629 "trsvcid": "4420", 01:13:28.629 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:13:28.629 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:13:28.629 "hdgst": false, 01:13:28.629 "ddgst": false 01:13:28.629 }, 01:13:28.629 "method": "bdev_nvme_attach_controller" 01:13:28.629 }' 01:13:28.629 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 01:13:28.629 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:13:28.629 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:13:28.629 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:13:28.629 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:13:28.629 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:13:28.629 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 01:13:28.629 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:13:28.629 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:13:28.629 06:12:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:13:28.629 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 01:13:28.629 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 01:13:28.629 fio-3.35 01:13:28.629 Starting 2 threads 01:13:38.598 01:13:38.598 filename0: (groupid=0, jobs=1): err= 0: pid=107585: Mon Dec 9 06:12:32 2024 01:13:38.598 read: IOPS=142, BW=570KiB/s (584kB/s)(5712KiB/10022msec) 01:13:38.598 slat (nsec): min=7930, max=56969, avg=10449.75, stdev=4232.56 01:13:38.598 clat (usec): min=452, max=42489, avg=28038.76, stdev=18942.79 01:13:38.598 lat (usec): min=460, max=42499, avg=28049.21, stdev=18942.46 01:13:38.598 clat percentiles (usec): 01:13:38.598 | 1.00th=[ 461], 5.00th=[ 474], 10.00th=[ 486], 20.00th=[ 510], 01:13:38.598 | 30.00th=[ 898], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 01:13:38.598 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 01:13:38.598 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 01:13:38.598 | 99.99th=[42730] 01:13:38.598 bw ( KiB/s): min= 448, max= 832, per=49.57%, avg=569.60, stdev=92.04, samples=20 01:13:38.598 iops : 
min= 112, max= 208, avg=142.40, stdev=23.01, samples=20 01:13:38.598 lat (usec) : 500=17.37%, 750=8.68%, 1000=5.81% 01:13:38.598 lat (msec) : 2=0.35%, 50=67.79% 01:13:38.598 cpu : usr=95.76%, sys=3.86%, ctx=24, majf=0, minf=0 01:13:38.598 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:13:38.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:38.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:38.598 issued rwts: total=1428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:38.598 latency : target=0, window=0, percentile=100.00%, depth=4 01:13:38.598 filename1: (groupid=0, jobs=1): err= 0: pid=107586: Mon Dec 9 06:12:32 2024 01:13:38.598 read: IOPS=144, BW=578KiB/s (592kB/s)(5792KiB/10023msec) 01:13:38.598 slat (nsec): min=7930, max=56495, avg=11065.68, stdev=4994.08 01:13:38.598 clat (usec): min=437, max=42513, avg=27651.25, stdev=19071.39 01:13:38.598 lat (usec): min=445, max=42524, avg=27662.32, stdev=19070.82 01:13:38.598 clat percentiles (usec): 01:13:38.598 | 1.00th=[ 457], 5.00th=[ 474], 10.00th=[ 486], 20.00th=[ 515], 01:13:38.598 | 30.00th=[ 857], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 01:13:38.598 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 01:13:38.598 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 01:13:38.598 | 99.99th=[42730] 01:13:38.598 bw ( KiB/s): min= 384, max= 800, per=50.27%, avg=577.60, stdev=130.70, samples=20 01:13:38.598 iops : min= 96, max= 200, avg=144.40, stdev=32.67, samples=20 01:13:38.598 lat (usec) : 500=16.23%, 750=9.88%, 1000=6.77% 01:13:38.598 lat (msec) : 2=0.28%, 50=66.85% 01:13:38.598 cpu : usr=95.52%, sys=4.06%, ctx=10, majf=0, minf=9 01:13:38.598 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:13:38.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:38.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:38.598 issued rwts: total=1448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:38.598 latency : target=0, window=0, percentile=100.00%, depth=4 01:13:38.598 01:13:38.598 Run status group 0 (all jobs): 01:13:38.598 READ: bw=1148KiB/s (1175kB/s), 570KiB/s-578KiB/s (584kB/s-592kB/s), io=11.2MiB (11.8MB), run=10022-10023msec 01:13:38.598 06:12:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 01:13:38.598 06:12:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 01:13:38.598 06:12:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 01:13:38.598 06:12:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 01:13:38.598 06:12:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 01:13:38.598 06:12:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:13:38.598 06:12:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:38.598 06:12:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:13:38.598 06:12:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:38.598 06:12:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:13:38.598 06:12:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:38.598 06:12:32 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:13:38.598 06:12:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:38.598 06:12:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 01:13:38.598 06:12:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 01:13:38.598 06:12:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 01:13:38.598 06:12:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:13:38.598 06:12:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:38.598 06:12:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:13:38.598 06:12:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:38.598 06:12:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 01:13:38.598 06:12:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:38.598 06:12:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:13:38.598 ************************************ 01:13:38.598 END TEST fio_dif_1_multi_subsystems 01:13:38.598 ************************************ 01:13:38.598 06:12:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:38.598 01:13:38.598 real 0m11.148s 01:13:38.598 user 0m19.953s 01:13:38.598 sys 0m1.048s 01:13:38.598 06:12:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 01:13:38.598 06:12:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:13:38.598 06:12:32 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 01:13:38.598 06:12:32 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:13:38.598 06:12:32 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 01:13:38.598 06:12:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:13:38.598 ************************************ 01:13:38.598 START TEST fio_dif_rand_params 01:13:38.598 ************************************ 01:13:38.598 06:12:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 01:13:38.598 06:12:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 01:13:38.598 06:12:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 01:13:38.598 06:12:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 01:13:38.598 06:12:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 01:13:38.598 06:12:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 01:13:38.598 06:12:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 01:13:38.598 06:12:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 01:13:38.598 06:12:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 01:13:38.598 06:12:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 01:13:38.598 06:12:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:13:38.598 06:12:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 01:13:38.598 06:12:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 01:13:38.598 06:12:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 01:13:38.598 06:12:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:38.598 06:12:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:38.598 bdev_null0 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:38.599 [2024-12-09 06:12:32.948501] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:13:38.599 { 01:13:38.599 "params": { 01:13:38.599 "name": "Nvme$subsystem", 01:13:38.599 "trtype": "$TEST_TRANSPORT", 01:13:38.599 "traddr": "$NVMF_FIRST_TARGET_IP", 01:13:38.599 "adrfam": "ipv4", 01:13:38.599 "trsvcid": "$NVMF_PORT", 01:13:38.599 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:13:38.599 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:13:38.599 "hdgst": ${hdgst:-false}, 01:13:38.599 "ddgst": ${ddgst:-false} 01:13:38.599 }, 01:13:38.599 "method": "bdev_nvme_attach_controller" 01:13:38.599 } 01:13:38.599 EOF 01:13:38.599 )") 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:13:38.599 "params": { 01:13:38.599 "name": "Nvme0", 01:13:38.599 "trtype": "tcp", 01:13:38.599 "traddr": "10.0.0.3", 01:13:38.599 "adrfam": "ipv4", 01:13:38.599 "trsvcid": "4420", 01:13:38.599 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:13:38.599 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:13:38.599 "hdgst": false, 01:13:38.599 "ddgst": false 01:13:38.599 }, 01:13:38.599 "method": "bdev_nvme_attach_controller" 01:13:38.599 }' 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:13:38.599 06:12:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:13:38.599 06:12:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:13:38.599 06:12:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:13:38.599 06:12:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:13:38.599 06:12:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:13:38.857 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 01:13:38.857 ... 
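Before the three jobs start below, the harness has (a) built the DIF type 3 target through four RPCs and (b) generated a job file on /dev/fd/61. A hand-run equivalent of both halves might look like this; the RPC arguments are taken from the rpc_cmd trace above, while the job file is only a reconstruction from the header/summary lines (the Nvme0n1 filename in particular is an assumption):
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# 64 MB null bdev, 512-byte blocks, 16-byte metadata, protection information type 3
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
# approximate job file for the NULL_DIF=3 / 128k / 3-job case (normally produced by gen_fio_conf)
cat > /tmp/dif.job <<'EOF'
[global]
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based
[filename0]
filename=Nvme0n1
EOF
fio would then be invoked exactly as in the earlier tests, with this file in place of /dev/fd/61.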
01:13:38.857 fio-3.35 01:13:38.857 Starting 3 threads 01:13:45.432 01:13:45.432 filename0: (groupid=0, jobs=1): err= 0: pid=107736: Mon Dec 9 06:12:38 2024 01:13:45.432 read: IOPS=214, BW=26.9MiB/s (28.2MB/s)(134MiB/5001msec) 01:13:45.432 slat (nsec): min=3897, max=45248, avg=12632.63, stdev=4577.57 01:13:45.432 clat (usec): min=5737, max=55021, avg=13935.62, stdev=8738.61 01:13:45.432 lat (usec): min=5751, max=55046, avg=13948.25, stdev=8739.19 01:13:45.432 clat percentiles (usec): 01:13:45.432 | 1.00th=[ 6718], 5.00th=[ 7898], 10.00th=[ 8225], 20.00th=[10552], 01:13:45.432 | 30.00th=[11731], 40.00th=[12518], 50.00th=[12911], 60.00th=[13173], 01:13:45.432 | 70.00th=[13435], 80.00th=[13960], 90.00th=[14353], 95.00th=[15533], 01:13:45.432 | 99.00th=[53740], 99.50th=[53740], 99.90th=[54264], 99.95th=[54789], 01:13:45.432 | 99.99th=[54789] 01:13:45.432 bw ( KiB/s): min=20224, max=30208, per=31.08%, avg=27221.33, stdev=3386.56, samples=9 01:13:45.432 iops : min= 158, max= 236, avg=212.67, stdev=26.46, samples=9 01:13:45.432 lat (msec) : 10=18.42%, 20=76.84%, 50=1.12%, 100=3.63% 01:13:45.432 cpu : usr=92.84%, sys=5.78%, ctx=37, majf=0, minf=0 01:13:45.432 IO depths : 1=6.9%, 2=93.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:13:45.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:45.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:45.432 issued rwts: total=1075,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:45.432 latency : target=0, window=0, percentile=100.00%, depth=3 01:13:45.432 filename0: (groupid=0, jobs=1): err= 0: pid=107737: Mon Dec 9 06:12:38 2024 01:13:45.432 read: IOPS=241, BW=30.2MiB/s (31.6MB/s)(152MiB/5041msec) 01:13:45.432 slat (nsec): min=4711, max=32578, avg=12719.21, stdev=3099.78 01:13:45.432 clat (usec): min=5856, max=54027, avg=12418.16, stdev=8513.89 01:13:45.432 lat (usec): min=5867, max=54040, avg=12430.88, stdev=8514.17 01:13:45.432 clat percentiles (usec): 01:13:45.432 | 1.00th=[ 6456], 5.00th=[ 7767], 10.00th=[ 8225], 20.00th=[ 9503], 01:13:45.432 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11076], 60.00th=[11338], 01:13:45.432 | 70.00th=[11600], 80.00th=[11994], 90.00th=[12518], 95.00th=[13698], 01:13:45.432 | 99.00th=[52691], 99.50th=[53216], 99.90th=[54264], 99.95th=[54264], 01:13:45.432 | 99.99th=[54264] 01:13:45.432 bw ( KiB/s): min=24064, max=37632, per=35.46%, avg=31058.30, stdev=4753.02, samples=10 01:13:45.432 iops : min= 188, max= 294, avg=242.60, stdev=37.17, samples=10 01:13:45.432 lat (msec) : 10=25.41%, 20=70.15%, 50=1.07%, 100=3.37% 01:13:45.432 cpu : usr=92.66%, sys=5.89%, ctx=9, majf=0, minf=0 01:13:45.432 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:13:45.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:45.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:45.432 issued rwts: total=1216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:45.432 latency : target=0, window=0, percentile=100.00%, depth=3 01:13:45.432 filename0: (groupid=0, jobs=1): err= 0: pid=107738: Mon Dec 9 06:12:38 2024 01:13:45.432 read: IOPS=231, BW=28.9MiB/s (30.3MB/s)(145MiB/5003msec) 01:13:45.432 slat (nsec): min=8046, max=33974, avg=10718.47, stdev=3573.37 01:13:45.432 clat (usec): min=4410, max=19273, avg=12933.93, stdev=3547.04 01:13:45.432 lat (usec): min=4418, max=19288, avg=12944.65, stdev=3547.20 01:13:45.432 clat percentiles (usec): 01:13:45.432 | 1.00th=[ 4424], 5.00th=[ 4555], 10.00th=[ 8717], 20.00th=[ 9503], 
01:13:45.432 | 30.00th=[10421], 40.00th=[14091], 50.00th=[14615], 60.00th=[15008], 01:13:45.432 | 70.00th=[15401], 80.00th=[15795], 90.00th=[16319], 95.00th=[16581], 01:13:45.432 | 99.00th=[17957], 99.50th=[18482], 99.90th=[19268], 99.95th=[19268], 01:13:45.432 | 99.99th=[19268] 01:13:45.432 bw ( KiB/s): min=25344, max=34560, per=33.77%, avg=29574.10, stdev=3163.89, samples=10 01:13:45.432 iops : min= 198, max= 270, avg=231.00, stdev=24.70, samples=10 01:13:45.432 lat (msec) : 10=27.46%, 20=72.54% 01:13:45.432 cpu : usr=92.88%, sys=5.72%, ctx=58, majf=0, minf=0 01:13:45.432 IO depths : 1=31.5%, 2=68.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:13:45.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:45.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:45.432 issued rwts: total=1158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:45.432 latency : target=0, window=0, percentile=100.00%, depth=3 01:13:45.432 01:13:45.432 Run status group 0 (all jobs): 01:13:45.432 READ: bw=85.5MiB/s (89.7MB/s), 26.9MiB/s-30.2MiB/s (28.2MB/s-31.6MB/s), io=431MiB (452MB), run=5001-5041msec 01:13:45.432 06:12:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 01:13:45.432 06:12:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 01:13:45.432 06:12:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:13:45.432 06:12:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 01:13:45.432 06:12:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 01:13:45.432 06:12:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:13:45.432 06:12:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:45.432 06:12:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:45.432 06:12:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:45.432 06:12:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:13:45.432 06:12:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:45.432 06:12:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:45.432 06:12:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:45.432 06:12:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 01:13:45.432 06:12:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 01:13:45.432 06:12:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 01:13:45.432 06:12:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 01:13:45.432 06:12:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 01:13:45.432 06:12:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 01:13:45.432 06:12:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 01:13:45.432 06:12:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 01:13:45.432 06:12:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:13:45.432 06:12:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 01:13:45.432 06:12:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 01:13:45.432 06:12:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 
64 512 --md-size 16 --dif-type 2 01:13:45.432 06:12:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:45.432 06:12:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:45.432 bdev_null0 01:13:45.432 06:12:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:45.432 06:12:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:13:45.433 06:12:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:45.433 06:12:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:45.433 06:12:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:45.433 06:12:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:13:45.433 06:12:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:45.433 06:12:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:45.433 06:12:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:45.433 06:12:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:13:45.433 06:12:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:45.433 06:12:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:45.433 [2024-12-09 06:12:38.979490] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:13:45.433 06:12:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:45.433 06:12:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:13:45.433 06:12:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 01:13:45.433 06:12:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 01:13:45.433 06:12:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 01:13:45.433 06:12:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:45.433 06:12:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:45.433 bdev_null1 01:13:45.433 06:12:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:45.433 06:12:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 01:13:45.433 06:12:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:45.433 06:12:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
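The block above walks through create_subsystem for ids 0 and 1: a null bdev with 16-byte metadata and DIF type 2 is created, wrapped in an NVMe-oF subsystem, exposed as a namespace, and given a TCP listener. A minimal sketch of that sequence as direct RPC calls follows; it assumes rpc_cmd resolves to the SPDK scripts/rpc.py client talking to an already running nvmf target (the path below is illustrative), and all method names and arguments are the ones shown in the log.

```bash
#!/usr/bin/env bash
# Per-subsystem setup sequence mirrored from the trace above (sub_id=0).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed client location
sub_id=0

# Null bdev: 64 MiB, 512-byte blocks, 16-byte metadata, DIF type 2.
"$rpc" bdev_null_create "bdev_null$sub_id" 64 512 --md-size 16 --dif-type 2

# NVMe-oF subsystem, namespace, and TCP listener on 10.0.0.3:4420.
"$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub_id" \
    --serial-number "53313233-$sub_id" --allow-any-host
"$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub_id" "bdev_null$sub_id"
"$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub_id" \
    -t tcp -a 10.0.0.3 -s 4420
```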
01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:45.433 bdev_null2 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:13:45.433 { 01:13:45.433 "params": { 01:13:45.433 "name": "Nvme$subsystem", 01:13:45.433 "trtype": "$TEST_TRANSPORT", 01:13:45.433 "traddr": "$NVMF_FIRST_TARGET_IP", 01:13:45.433 "adrfam": "ipv4", 01:13:45.433 "trsvcid": "$NVMF_PORT", 01:13:45.433 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:13:45.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:13:45.433 "hdgst": ${hdgst:-false}, 01:13:45.433 "ddgst": ${ddgst:-false} 01:13:45.433 }, 01:13:45.433 "method": "bdev_nvme_attach_controller" 01:13:45.433 } 01:13:45.433 EOF 01:13:45.433 )") 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:13:45.433 { 01:13:45.433 "params": { 01:13:45.433 "name": "Nvme$subsystem", 01:13:45.433 "trtype": "$TEST_TRANSPORT", 01:13:45.433 "traddr": "$NVMF_FIRST_TARGET_IP", 01:13:45.433 "adrfam": "ipv4", 01:13:45.433 "trsvcid": "$NVMF_PORT", 01:13:45.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:13:45.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:13:45.433 "hdgst": ${hdgst:-false}, 01:13:45.433 "ddgst": ${ddgst:-false} 01:13:45.433 }, 01:13:45.433 "method": "bdev_nvme_attach_controller" 01:13:45.433 } 01:13:45.433 EOF 01:13:45.433 )") 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@582 -- # cat 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:13:45.433 { 01:13:45.433 "params": { 01:13:45.433 "name": "Nvme$subsystem", 01:13:45.433 "trtype": "$TEST_TRANSPORT", 01:13:45.433 "traddr": "$NVMF_FIRST_TARGET_IP", 01:13:45.433 "adrfam": "ipv4", 01:13:45.433 "trsvcid": "$NVMF_PORT", 01:13:45.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:13:45.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:13:45.433 "hdgst": ${hdgst:-false}, 01:13:45.433 "ddgst": ${ddgst:-false} 01:13:45.433 }, 01:13:45.433 "method": "bdev_nvme_attach_controller" 01:13:45.433 } 01:13:45.433 EOF 01:13:45.433 )") 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 01:13:45.433 06:12:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:13:45.433 "params": { 01:13:45.433 "name": "Nvme0", 01:13:45.433 "trtype": "tcp", 01:13:45.433 "traddr": "10.0.0.3", 01:13:45.433 "adrfam": "ipv4", 01:13:45.433 "trsvcid": "4420", 01:13:45.433 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:13:45.433 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:13:45.434 "hdgst": false, 01:13:45.434 "ddgst": false 01:13:45.434 }, 01:13:45.434 "method": "bdev_nvme_attach_controller" 01:13:45.434 },{ 01:13:45.434 "params": { 01:13:45.434 "name": "Nvme1", 01:13:45.434 "trtype": "tcp", 01:13:45.434 "traddr": "10.0.0.3", 01:13:45.434 "adrfam": "ipv4", 01:13:45.434 "trsvcid": "4420", 01:13:45.434 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:13:45.434 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:13:45.434 "hdgst": false, 01:13:45.434 "ddgst": false 01:13:45.434 }, 01:13:45.434 "method": "bdev_nvme_attach_controller" 01:13:45.434 },{ 01:13:45.434 "params": { 01:13:45.434 "name": "Nvme2", 01:13:45.434 "trtype": "tcp", 01:13:45.434 "traddr": "10.0.0.3", 01:13:45.434 "adrfam": "ipv4", 01:13:45.434 "trsvcid": "4420", 01:13:45.434 "subnqn": "nqn.2016-06.io.spdk:cnode2", 01:13:45.434 "hostnqn": "nqn.2016-06.io.spdk:host2", 01:13:45.434 "hdgst": false, 01:13:45.434 "ddgst": false 01:13:45.434 }, 01:13:45.434 "method": "bdev_nvme_attach_controller" 01:13:45.434 }' 01:13:45.434 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:13:45.434 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:13:45.434 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:13:45.434 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:13:45.434 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:13:45.434 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:13:45.434 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:13:45.434 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:13:45.434 06:12:39 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:13:45.434 06:12:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:13:45.434 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 01:13:45.434 ... 01:13:45.434 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 01:13:45.434 ... 01:13:45.434 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 01:13:45.434 ... 01:13:45.434 fio-3.35 01:13:45.434 Starting 24 threads 01:13:57.635 01:13:57.635 filename0: (groupid=0, jobs=1): err= 0: pid=107837: Mon Dec 9 06:12:50 2024 01:13:57.635 read: IOPS=219, BW=878KiB/s (899kB/s)(8808KiB/10032msec) 01:13:57.635 slat (usec): min=3, max=8025, avg=19.89, stdev=250.36 01:13:57.635 clat (msec): min=20, max=155, avg=72.68, stdev=22.00 01:13:57.635 lat (msec): min=20, max=155, avg=72.70, stdev=22.01 01:13:57.635 clat percentiles (msec): 01:13:57.635 | 1.00th=[ 30], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 58], 01:13:57.635 | 30.00th=[ 62], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 74], 01:13:57.635 | 70.00th=[ 80], 80.00th=[ 88], 90.00th=[ 105], 95.00th=[ 115], 01:13:57.635 | 99.00th=[ 130], 99.50th=[ 136], 99.90th=[ 157], 99.95th=[ 157], 01:13:57.635 | 99.99th=[ 157] 01:13:57.635 bw ( KiB/s): min= 640, max= 1408, per=4.27%, avg=878.80, stdev=159.08, samples=20 01:13:57.635 iops : min= 160, max= 352, avg=219.70, stdev=39.77, samples=20 01:13:57.635 lat (msec) : 50=17.39%, 100=71.39%, 250=11.22% 01:13:57.635 cpu : usr=35.75%, sys=1.05%, ctx=1133, majf=0, minf=9 01:13:57.635 IO depths : 1=1.5%, 2=3.2%, 4=11.0%, 8=72.0%, 16=12.3%, 32=0.0%, >=64=0.0% 01:13:57.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.635 complete : 0=0.0%, 4=90.3%, 8=5.3%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.635 issued rwts: total=2202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:57.635 latency : target=0, window=0, percentile=100.00%, depth=16 01:13:57.635 filename0: (groupid=0, jobs=1): err= 0: pid=107838: Mon Dec 9 06:12:50 2024 01:13:57.635 read: IOPS=233, BW=933KiB/s (956kB/s)(9400KiB/10072msec) 01:13:57.635 slat (usec): min=3, max=8055, avg=21.01, stdev=166.20 01:13:57.635 clat (usec): min=1750, max=156067, avg=68307.42, stdev=29250.14 01:13:57.635 lat (usec): min=1759, max=156113, avg=68328.44, stdev=29252.14 01:13:57.635 clat percentiles (msec): 01:13:57.635 | 1.00th=[ 3], 5.00th=[ 11], 10.00th=[ 23], 20.00th=[ 48], 01:13:57.635 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 73], 01:13:57.635 | 70.00th=[ 82], 80.00th=[ 91], 90.00th=[ 107], 95.00th=[ 110], 01:13:57.636 | 99.00th=[ 144], 99.50th=[ 153], 99.90th=[ 157], 99.95th=[ 157], 01:13:57.636 | 99.99th=[ 157] 01:13:57.636 bw ( KiB/s): min= 720, max= 3009, per=4.54%, avg=933.25, stdev=495.42, samples=20 01:13:57.636 iops : min= 180, max= 752, avg=233.30, stdev=123.80, samples=20 01:13:57.636 lat (msec) : 2=0.68%, 4=1.66%, 10=2.38%, 20=4.94%, 50=15.83% 01:13:57.636 lat (msec) : 100=62.68%, 250=11.83% 01:13:57.636 cpu : usr=34.49%, sys=0.94%, ctx=932, majf=0, minf=9 01:13:57.636 IO depths : 1=1.9%, 2=4.1%, 4=12.9%, 8=69.8%, 16=11.4%, 32=0.0%, >=64=0.0% 01:13:57.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.636 complete : 0=0.0%, 
4=90.3%, 8=4.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.636 issued rwts: total=2350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:57.636 latency : target=0, window=0, percentile=100.00%, depth=16 01:13:57.636 filename0: (groupid=0, jobs=1): err= 0: pid=107839: Mon Dec 9 06:12:50 2024 01:13:57.636 read: IOPS=193, BW=774KiB/s (793kB/s)(7752KiB/10013msec) 01:13:57.636 slat (usec): min=3, max=8046, avg=31.24, stdev=271.87 01:13:57.636 clat (msec): min=31, max=161, avg=82.38, stdev=23.00 01:13:57.636 lat (msec): min=31, max=161, avg=82.41, stdev=23.00 01:13:57.636 clat percentiles (msec): 01:13:57.636 | 1.00th=[ 33], 5.00th=[ 44], 10.00th=[ 63], 20.00th=[ 68], 01:13:57.636 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 84], 01:13:57.636 | 70.00th=[ 91], 80.00th=[ 105], 90.00th=[ 112], 95.00th=[ 128], 01:13:57.636 | 99.00th=[ 146], 99.50th=[ 146], 99.90th=[ 163], 99.95th=[ 163], 01:13:57.636 | 99.99th=[ 163] 01:13:57.636 bw ( KiB/s): min= 512, max= 1136, per=3.77%, avg=775.58, stdev=135.25, samples=19 01:13:57.636 iops : min= 128, max= 284, avg=193.89, stdev=33.81, samples=19 01:13:57.636 lat (msec) : 50=6.91%, 100=70.18%, 250=22.91% 01:13:57.636 cpu : usr=44.90%, sys=1.32%, ctx=1356, majf=0, minf=9 01:13:57.636 IO depths : 1=3.7%, 2=8.0%, 4=19.7%, 8=59.5%, 16=9.0%, 32=0.0%, >=64=0.0% 01:13:57.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.636 complete : 0=0.0%, 4=92.6%, 8=1.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.636 issued rwts: total=1938,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:57.636 latency : target=0, window=0, percentile=100.00%, depth=16 01:13:57.636 filename0: (groupid=0, jobs=1): err= 0: pid=107840: Mon Dec 9 06:12:50 2024 01:13:57.636 read: IOPS=236, BW=945KiB/s (968kB/s)(9528KiB/10078msec) 01:13:57.636 slat (usec): min=3, max=4044, avg=13.78, stdev=82.77 01:13:57.636 clat (msec): min=7, max=164, avg=67.56, stdev=25.77 01:13:57.636 lat (msec): min=7, max=164, avg=67.57, stdev=25.77 01:13:57.636 clat percentiles (msec): 01:13:57.636 | 1.00th=[ 10], 5.00th=[ 28], 10.00th=[ 36], 20.00th=[ 48], 01:13:57.636 | 30.00th=[ 55], 40.00th=[ 62], 50.00th=[ 68], 60.00th=[ 71], 01:13:57.636 | 70.00th=[ 78], 80.00th=[ 86], 90.00th=[ 101], 95.00th=[ 116], 01:13:57.636 | 99.00th=[ 142], 99.50th=[ 157], 99.90th=[ 165], 99.95th=[ 165], 01:13:57.636 | 99.99th=[ 165] 01:13:57.636 bw ( KiB/s): min= 608, max= 2048, per=4.60%, avg=946.40, stdev=305.23, samples=20 01:13:57.636 iops : min= 152, max= 512, avg=236.60, stdev=76.31, samples=20 01:13:57.636 lat (msec) : 10=1.55%, 20=2.56%, 50=18.93%, 100=66.75%, 250=10.20% 01:13:57.636 cpu : usr=39.99%, sys=1.33%, ctx=1333, majf=0, minf=9 01:13:57.636 IO depths : 1=1.2%, 2=3.1%, 4=11.4%, 8=72.0%, 16=12.3%, 32=0.0%, >=64=0.0% 01:13:57.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.636 complete : 0=0.0%, 4=90.4%, 8=5.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.636 issued rwts: total=2382,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:57.636 latency : target=0, window=0, percentile=100.00%, depth=16 01:13:57.636 filename0: (groupid=0, jobs=1): err= 0: pid=107841: Mon Dec 9 06:12:50 2024 01:13:57.636 read: IOPS=205, BW=821KiB/s (841kB/s)(8232KiB/10023msec) 01:13:57.636 slat (usec): min=3, max=8024, avg=17.49, stdev=197.72 01:13:57.636 clat (msec): min=21, max=167, avg=77.77, stdev=23.40 01:13:57.636 lat (msec): min=21, max=167, avg=77.78, stdev=23.41 01:13:57.636 clat percentiles (msec): 01:13:57.636 | 1.00th=[ 33], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 59], 
01:13:57.636 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 84], 01:13:57.636 | 70.00th=[ 89], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 121], 01:13:57.636 | 99.00th=[ 131], 99.50th=[ 153], 99.90th=[ 169], 99.95th=[ 169], 01:13:57.636 | 99.99th=[ 169] 01:13:57.636 bw ( KiB/s): min= 640, max= 1200, per=4.02%, avg=828.21, stdev=148.79, samples=19 01:13:57.636 iops : min= 160, max= 300, avg=207.05, stdev=37.20, samples=19 01:13:57.636 lat (msec) : 50=14.04%, 100=70.75%, 250=15.21% 01:13:57.636 cpu : usr=36.63%, sys=1.22%, ctx=1155, majf=0, minf=9 01:13:57.636 IO depths : 1=0.9%, 2=2.5%, 4=10.1%, 8=74.0%, 16=12.5%, 32=0.0%, >=64=0.0% 01:13:57.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.636 complete : 0=0.0%, 4=90.0%, 8=5.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.636 issued rwts: total=2058,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:57.636 latency : target=0, window=0, percentile=100.00%, depth=16 01:13:57.636 filename0: (groupid=0, jobs=1): err= 0: pid=107842: Mon Dec 9 06:12:50 2024 01:13:57.636 read: IOPS=228, BW=912KiB/s (934kB/s)(9184KiB/10069msec) 01:13:57.636 slat (nsec): min=4529, max=90194, avg=17506.47, stdev=11510.06 01:13:57.636 clat (msec): min=6, max=159, avg=69.90, stdev=25.79 01:13:57.636 lat (msec): min=6, max=159, avg=69.92, stdev=25.79 01:13:57.636 clat percentiles (msec): 01:13:57.636 | 1.00th=[ 11], 5.00th=[ 17], 10.00th=[ 36], 20.00th=[ 52], 01:13:57.636 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 77], 01:13:57.636 | 70.00th=[ 82], 80.00th=[ 89], 90.00th=[ 104], 95.00th=[ 111], 01:13:57.636 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 161], 99.95th=[ 161], 01:13:57.636 | 99.99th=[ 161] 01:13:57.636 bw ( KiB/s): min= 560, max= 2148, per=4.43%, avg=912.20, stdev=315.39, samples=20 01:13:57.636 iops : min= 140, max= 537, avg=228.05, stdev=78.85, samples=20 01:13:57.636 lat (msec) : 10=0.70%, 20=5.14%, 50=13.68%, 100=69.12%, 250=11.37% 01:13:57.636 cpu : usr=46.85%, sys=1.07%, ctx=1371, majf=0, minf=9 01:13:57.636 IO depths : 1=1.0%, 2=2.2%, 4=9.0%, 8=75.0%, 16=12.9%, 32=0.0%, >=64=0.0% 01:13:57.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.636 complete : 0=0.0%, 4=89.9%, 8=5.7%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.636 issued rwts: total=2296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:57.636 latency : target=0, window=0, percentile=100.00%, depth=16 01:13:57.636 filename0: (groupid=0, jobs=1): err= 0: pid=107843: Mon Dec 9 06:12:50 2024 01:13:57.636 read: IOPS=197, BW=790KiB/s (809kB/s)(7920KiB/10030msec) 01:13:57.636 slat (usec): min=4, max=8020, avg=15.28, stdev=180.04 01:13:57.636 clat (msec): min=30, max=158, avg=80.93, stdev=23.05 01:13:57.636 lat (msec): min=30, max=158, avg=80.94, stdev=23.05 01:13:57.636 clat percentiles (msec): 01:13:57.636 | 1.00th=[ 32], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 67], 01:13:57.636 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 84], 01:13:57.636 | 70.00th=[ 90], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 125], 01:13:57.636 | 99.00th=[ 148], 99.50th=[ 153], 99.90th=[ 159], 99.95th=[ 159], 01:13:57.636 | 99.99th=[ 159] 01:13:57.636 bw ( KiB/s): min= 596, max= 1208, per=3.82%, avg=785.40, stdev=146.49, samples=20 01:13:57.636 iops : min= 149, max= 302, avg=196.35, stdev=36.62, samples=20 01:13:57.636 lat (msec) : 50=10.51%, 100=71.97%, 250=17.53% 01:13:57.636 cpu : usr=34.99%, sys=1.03%, ctx=960, majf=0, minf=9 01:13:57.636 IO depths : 1=1.6%, 2=3.6%, 4=11.9%, 8=70.9%, 16=11.9%, 32=0.0%, >=64=0.0% 01:13:57.636 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.636 complete : 0=0.0%, 4=90.5%, 8=4.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.636 issued rwts: total=1980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:57.636 latency : target=0, window=0, percentile=100.00%, depth=16 01:13:57.636 filename0: (groupid=0, jobs=1): err= 0: pid=107844: Mon Dec 9 06:12:50 2024 01:13:57.636 read: IOPS=203, BW=813KiB/s (833kB/s)(8152KiB/10026msec) 01:13:57.636 slat (usec): min=4, max=8025, avg=16.30, stdev=177.63 01:13:57.636 clat (msec): min=20, max=179, avg=78.54, stdev=24.95 01:13:57.636 lat (msec): min=20, max=179, avg=78.56, stdev=24.95 01:13:57.636 clat percentiles (msec): 01:13:57.636 | 1.00th=[ 24], 5.00th=[ 42], 10.00th=[ 48], 20.00th=[ 61], 01:13:57.636 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 83], 01:13:57.636 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 124], 01:13:57.636 | 99.00th=[ 157], 99.50th=[ 169], 99.90th=[ 180], 99.95th=[ 180], 01:13:57.636 | 99.99th=[ 180] 01:13:57.636 bw ( KiB/s): min= 625, max= 1282, per=3.93%, avg=808.95, stdev=164.30, samples=20 01:13:57.636 iops : min= 156, max= 320, avg=202.20, stdev=41.01, samples=20 01:13:57.636 lat (msec) : 50=11.58%, 100=72.96%, 250=15.46% 01:13:57.636 cpu : usr=36.15%, sys=1.05%, ctx=1069, majf=0, minf=9 01:13:57.636 IO depths : 1=1.6%, 2=3.7%, 4=11.6%, 8=71.1%, 16=12.1%, 32=0.0%, >=64=0.0% 01:13:57.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.636 complete : 0=0.0%, 4=90.8%, 8=4.7%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.636 issued rwts: total=2038,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:57.636 latency : target=0, window=0, percentile=100.00%, depth=16 01:13:57.636 filename1: (groupid=0, jobs=1): err= 0: pid=107845: Mon Dec 9 06:12:50 2024 01:13:57.636 read: IOPS=236, BW=946KiB/s (969kB/s)(9544KiB/10085msec) 01:13:57.636 slat (usec): min=5, max=8017, avg=15.53, stdev=163.97 01:13:57.636 clat (msec): min=9, max=167, avg=67.43, stdev=25.56 01:13:57.636 lat (msec): min=9, max=167, avg=67.44, stdev=25.57 01:13:57.636 clat percentiles (msec): 01:13:57.636 | 1.00th=[ 12], 5.00th=[ 32], 10.00th=[ 40], 20.00th=[ 48], 01:13:57.636 | 30.00th=[ 51], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 72], 01:13:57.636 | 70.00th=[ 80], 80.00th=[ 85], 90.00th=[ 103], 95.00th=[ 114], 01:13:57.636 | 99.00th=[ 142], 99.50th=[ 153], 99.90th=[ 169], 99.95th=[ 169], 01:13:57.636 | 99.99th=[ 169] 01:13:57.636 bw ( KiB/s): min= 568, max= 1768, per=4.61%, avg=948.00, stdev=253.32, samples=20 01:13:57.636 iops : min= 142, max= 442, avg=237.00, stdev=63.33, samples=20 01:13:57.636 lat (msec) : 10=0.67%, 20=2.01%, 50=26.78%, 100=60.44%, 250=10.10% 01:13:57.636 cpu : usr=35.58%, sys=1.20%, ctx=990, majf=0, minf=9 01:13:57.636 IO depths : 1=0.9%, 2=1.8%, 4=7.9%, 8=76.6%, 16=12.8%, 32=0.0%, >=64=0.0% 01:13:57.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.637 complete : 0=0.0%, 4=89.6%, 8=6.1%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.637 issued rwts: total=2386,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:57.637 latency : target=0, window=0, percentile=100.00%, depth=16 01:13:57.637 filename1: (groupid=0, jobs=1): err= 0: pid=107846: Mon Dec 9 06:12:50 2024 01:13:57.637 read: IOPS=207, BW=830KiB/s (850kB/s)(8328KiB/10033msec) 01:13:57.637 slat (usec): min=3, max=8024, avg=24.74, stdev=233.64 01:13:57.637 clat (msec): min=32, max=202, avg=76.92, stdev=23.57 01:13:57.637 lat (msec): min=32, max=202, avg=76.94, stdev=23.57 01:13:57.637 clat 
percentiles (msec): 01:13:57.637 | 1.00th=[ 33], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 59], 01:13:57.637 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 78], 01:13:57.637 | 70.00th=[ 85], 80.00th=[ 93], 90.00th=[ 108], 95.00th=[ 121], 01:13:57.637 | 99.00th=[ 146], 99.50th=[ 167], 99.90th=[ 203], 99.95th=[ 203], 01:13:57.637 | 99.99th=[ 203] 01:13:57.637 bw ( KiB/s): min= 512, max= 1152, per=4.01%, avg=826.40, stdev=159.45, samples=20 01:13:57.637 iops : min= 128, max= 288, avg=206.60, stdev=39.86, samples=20 01:13:57.637 lat (msec) : 50=13.30%, 100=72.33%, 250=14.36% 01:13:57.637 cpu : usr=35.52%, sys=1.05%, ctx=1042, majf=0, minf=9 01:13:57.637 IO depths : 1=2.2%, 2=4.9%, 4=13.7%, 8=68.2%, 16=11.1%, 32=0.0%, >=64=0.0% 01:13:57.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.637 complete : 0=0.0%, 4=91.1%, 8=4.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.637 issued rwts: total=2082,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:57.637 latency : target=0, window=0, percentile=100.00%, depth=16 01:13:57.637 filename1: (groupid=0, jobs=1): err= 0: pid=107847: Mon Dec 9 06:12:50 2024 01:13:57.637 read: IOPS=237, BW=951KiB/s (974kB/s)(9592KiB/10085msec) 01:13:57.637 slat (usec): min=4, max=8023, avg=18.52, stdev=200.42 01:13:57.637 clat (msec): min=7, max=167, avg=67.11, stdev=25.72 01:13:57.637 lat (msec): min=7, max=167, avg=67.12, stdev=25.71 01:13:57.637 clat percentiles (msec): 01:13:57.637 | 1.00th=[ 10], 5.00th=[ 22], 10.00th=[ 36], 20.00th=[ 48], 01:13:57.637 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 72], 01:13:57.637 | 70.00th=[ 73], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 109], 01:13:57.637 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 169], 01:13:57.637 | 99.99th=[ 169] 01:13:57.637 bw ( KiB/s): min= 640, max= 2136, per=4.63%, avg=952.80, stdev=303.68, samples=20 01:13:57.637 iops : min= 160, max= 534, avg=238.20, stdev=75.92, samples=20 01:13:57.637 lat (msec) : 10=1.58%, 20=3.09%, 50=23.19%, 100=64.10%, 250=8.05% 01:13:57.637 cpu : usr=33.04%, sys=1.02%, ctx=899, majf=0, minf=9 01:13:57.637 IO depths : 1=1.2%, 2=2.8%, 4=9.5%, 8=73.9%, 16=12.7%, 32=0.0%, >=64=0.0% 01:13:57.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.637 complete : 0=0.0%, 4=90.1%, 8=5.7%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.637 issued rwts: total=2398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:57.637 latency : target=0, window=0, percentile=100.00%, depth=16 01:13:57.637 filename1: (groupid=0, jobs=1): err= 0: pid=107848: Mon Dec 9 06:12:50 2024 01:13:57.637 read: IOPS=200, BW=802KiB/s (822kB/s)(8068KiB/10055msec) 01:13:57.637 slat (usec): min=4, max=8056, avg=33.60, stdev=357.37 01:13:57.637 clat (msec): min=30, max=171, avg=79.56, stdev=25.38 01:13:57.637 lat (msec): min=30, max=171, avg=79.60, stdev=25.38 01:13:57.637 clat percentiles (msec): 01:13:57.637 | 1.00th=[ 32], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 61], 01:13:57.637 | 30.00th=[ 69], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 83], 01:13:57.637 | 70.00th=[ 93], 80.00th=[ 99], 90.00th=[ 117], 95.00th=[ 124], 01:13:57.637 | 99.00th=[ 167], 99.50th=[ 167], 99.90th=[ 171], 99.95th=[ 171], 01:13:57.637 | 99.99th=[ 171] 01:13:57.637 bw ( KiB/s): min= 512, max= 1232, per=3.89%, avg=800.40, stdev=159.74, samples=20 01:13:57.637 iops : min= 128, max= 308, avg=200.10, stdev=39.93, samples=20 01:13:57.637 lat (msec) : 50=12.84%, 100=67.67%, 250=19.48% 01:13:57.637 cpu : usr=37.49%, sys=1.34%, ctx=840, majf=0, minf=9 01:13:57.637 IO depths : 
1=2.1%, 2=4.8%, 4=15.0%, 8=67.2%, 16=10.9%, 32=0.0%, >=64=0.0% 01:13:57.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.637 complete : 0=0.0%, 4=91.0%, 8=3.8%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.637 issued rwts: total=2017,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:57.637 latency : target=0, window=0, percentile=100.00%, depth=16 01:13:57.637 filename1: (groupid=0, jobs=1): err= 0: pid=107849: Mon Dec 9 06:12:50 2024 01:13:57.637 read: IOPS=212, BW=851KiB/s (871kB/s)(8556KiB/10058msec) 01:13:57.637 slat (usec): min=4, max=9047, avg=35.38, stdev=360.60 01:13:57.637 clat (msec): min=18, max=166, avg=74.88, stdev=23.98 01:13:57.637 lat (msec): min=20, max=166, avg=74.91, stdev=23.98 01:13:57.637 clat percentiles (msec): 01:13:57.637 | 1.00th=[ 27], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 55], 01:13:57.637 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 74], 01:13:57.637 | 70.00th=[ 84], 80.00th=[ 94], 90.00th=[ 108], 95.00th=[ 123], 01:13:57.637 | 99.00th=[ 146], 99.50th=[ 155], 99.90th=[ 167], 99.95th=[ 167], 01:13:57.637 | 99.99th=[ 167] 01:13:57.637 bw ( KiB/s): min= 512, max= 1408, per=4.13%, avg=849.20, stdev=170.66, samples=20 01:13:57.637 iops : min= 128, max= 352, avg=212.30, stdev=42.67, samples=20 01:13:57.637 lat (msec) : 20=0.05%, 50=18.14%, 100=67.65%, 250=14.17% 01:13:57.637 cpu : usr=32.23%, sys=0.87%, ctx=949, majf=0, minf=9 01:13:57.637 IO depths : 1=1.6%, 2=3.7%, 4=12.9%, 8=70.3%, 16=11.5%, 32=0.0%, >=64=0.0% 01:13:57.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.637 complete : 0=0.0%, 4=90.5%, 8=4.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.637 issued rwts: total=2139,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:57.637 latency : target=0, window=0, percentile=100.00%, depth=16 01:13:57.637 filename1: (groupid=0, jobs=1): err= 0: pid=107850: Mon Dec 9 06:12:50 2024 01:13:57.637 read: IOPS=216, BW=866KiB/s (886kB/s)(8684KiB/10032msec) 01:13:57.637 slat (usec): min=7, max=4054, avg=23.90, stdev=150.23 01:13:57.637 clat (msec): min=16, max=161, avg=73.78, stdev=23.94 01:13:57.637 lat (msec): min=16, max=161, avg=73.81, stdev=23.93 01:13:57.637 clat percentiles (msec): 01:13:57.637 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 44], 20.00th=[ 55], 01:13:57.637 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 75], 01:13:57.637 | 70.00th=[ 82], 80.00th=[ 93], 90.00th=[ 106], 95.00th=[ 118], 01:13:57.637 | 99.00th=[ 140], 99.50th=[ 150], 99.90th=[ 163], 99.95th=[ 163], 01:13:57.637 | 99.99th=[ 163] 01:13:57.637 bw ( KiB/s): min= 640, max= 1496, per=4.19%, avg=862.00, stdev=179.24, samples=20 01:13:57.637 iops : min= 160, max= 374, avg=215.50, stdev=44.81, samples=20 01:13:57.637 lat (msec) : 20=0.74%, 50=14.33%, 100=71.90%, 250=13.04% 01:13:57.637 cpu : usr=43.07%, sys=1.27%, ctx=1268, majf=0, minf=9 01:13:57.637 IO depths : 1=2.2%, 2=4.7%, 4=13.3%, 8=68.5%, 16=11.3%, 32=0.0%, >=64=0.0% 01:13:57.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.637 complete : 0=0.0%, 4=90.9%, 8=4.5%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.637 issued rwts: total=2171,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:57.637 latency : target=0, window=0, percentile=100.00%, depth=16 01:13:57.637 filename1: (groupid=0, jobs=1): err= 0: pid=107851: Mon Dec 9 06:12:50 2024 01:13:57.637 read: IOPS=209, BW=837KiB/s (857kB/s)(8408KiB/10051msec) 01:13:57.637 slat (usec): min=3, max=8045, avg=38.95, stdev=262.76 01:13:57.637 clat (msec): min=23, max=150, avg=76.20, stdev=23.61 
01:13:57.637 lat (msec): min=23, max=150, avg=76.24, stdev=23.62 01:13:57.637 clat percentiles (msec): 01:13:57.637 | 1.00th=[ 33], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 57], 01:13:57.637 | 30.00th=[ 65], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 78], 01:13:57.637 | 70.00th=[ 85], 80.00th=[ 97], 90.00th=[ 111], 95.00th=[ 121], 01:13:57.637 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 150], 99.95th=[ 150], 01:13:57.637 | 99.99th=[ 150] 01:13:57.637 bw ( KiB/s): min= 584, max= 1269, per=4.05%, avg=833.85, stdev=161.10, samples=20 01:13:57.637 iops : min= 146, max= 317, avg=208.45, stdev=40.24, samples=20 01:13:57.637 lat (msec) : 50=14.37%, 100=69.12%, 250=16.51% 01:13:57.637 cpu : usr=41.98%, sys=1.12%, ctx=1280, majf=0, minf=9 01:13:57.637 IO depths : 1=2.6%, 2=5.8%, 4=15.5%, 8=66.0%, 16=10.1%, 32=0.0%, >=64=0.0% 01:13:57.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.637 complete : 0=0.0%, 4=91.5%, 8=3.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.637 issued rwts: total=2102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:57.637 latency : target=0, window=0, percentile=100.00%, depth=16 01:13:57.637 filename1: (groupid=0, jobs=1): err= 0: pid=107852: Mon Dec 9 06:12:50 2024 01:13:57.637 read: IOPS=205, BW=821KiB/s (841kB/s)(8240KiB/10037msec) 01:13:57.637 slat (usec): min=4, max=8064, avg=53.46, stdev=466.41 01:13:57.637 clat (msec): min=24, max=172, avg=77.52, stdev=25.17 01:13:57.637 lat (msec): min=24, max=172, avg=77.57, stdev=25.18 01:13:57.637 clat percentiles (msec): 01:13:57.637 | 1.00th=[ 31], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 57], 01:13:57.637 | 30.00th=[ 67], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 81], 01:13:57.637 | 70.00th=[ 89], 80.00th=[ 97], 90.00th=[ 107], 95.00th=[ 123], 01:13:57.637 | 99.00th=[ 159], 99.50th=[ 174], 99.90th=[ 174], 99.95th=[ 174], 01:13:57.637 | 99.99th=[ 174] 01:13:57.637 bw ( KiB/s): min= 512, max= 1408, per=3.97%, avg=817.60, stdev=198.62, samples=20 01:13:57.637 iops : min= 128, max= 352, avg=204.40, stdev=49.65, samples=20 01:13:57.637 lat (msec) : 50=13.98%, 100=69.47%, 250=16.55% 01:13:57.637 cpu : usr=43.68%, sys=1.26%, ctx=1257, majf=0, minf=9 01:13:57.637 IO depths : 1=2.7%, 2=6.0%, 4=16.3%, 8=64.8%, 16=10.2%, 32=0.0%, >=64=0.0% 01:13:57.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.637 complete : 0=0.0%, 4=91.6%, 8=3.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.637 issued rwts: total=2060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:57.637 latency : target=0, window=0, percentile=100.00%, depth=16 01:13:57.637 filename2: (groupid=0, jobs=1): err= 0: pid=107853: Mon Dec 9 06:12:50 2024 01:13:57.637 read: IOPS=215, BW=863KiB/s (883kB/s)(8652KiB/10031msec) 01:13:57.637 slat (usec): min=4, max=4051, avg=23.25, stdev=123.23 01:13:57.637 clat (msec): min=24, max=189, avg=74.05, stdev=25.12 01:13:57.637 lat (msec): min=24, max=189, avg=74.07, stdev=25.13 01:13:57.637 clat percentiles (msec): 01:13:57.637 | 1.00th=[ 31], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 54], 01:13:57.637 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 73], 01:13:57.637 | 70.00th=[ 82], 80.00th=[ 94], 90.00th=[ 112], 95.00th=[ 123], 01:13:57.638 | 99.00th=[ 153], 99.50th=[ 157], 99.90th=[ 190], 99.95th=[ 190], 01:13:57.638 | 99.99th=[ 190] 01:13:57.638 bw ( KiB/s): min= 592, max= 1386, per=4.17%, avg=858.90, stdev=195.87, samples=20 01:13:57.638 iops : min= 148, max= 346, avg=214.70, stdev=48.90, samples=20 01:13:57.638 lat (msec) : 50=16.83%, 100=70.13%, 250=13.04% 01:13:57.638 cpu : 
usr=39.98%, sys=0.95%, ctx=1289, majf=0, minf=9 01:13:57.638 IO depths : 1=1.9%, 2=4.3%, 4=13.1%, 8=69.4%, 16=11.3%, 32=0.0%, >=64=0.0% 01:13:57.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.638 complete : 0=0.0%, 4=90.7%, 8=4.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.638 issued rwts: total=2163,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:57.638 latency : target=0, window=0, percentile=100.00%, depth=16 01:13:57.638 filename2: (groupid=0, jobs=1): err= 0: pid=107854: Mon Dec 9 06:12:50 2024 01:13:57.638 read: IOPS=198, BW=796KiB/s (815kB/s)(7984KiB/10031msec) 01:13:57.638 slat (usec): min=7, max=8070, avg=56.37, stdev=485.54 01:13:57.638 clat (msec): min=27, max=155, avg=79.98, stdev=25.08 01:13:57.638 lat (msec): min=27, max=156, avg=80.03, stdev=25.08 01:13:57.638 clat percentiles (msec): 01:13:57.638 | 1.00th=[ 31], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 62], 01:13:57.638 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 84], 01:13:57.638 | 70.00th=[ 91], 80.00th=[ 100], 90.00th=[ 117], 95.00th=[ 128], 01:13:57.638 | 99.00th=[ 146], 99.50th=[ 155], 99.90th=[ 157], 99.95th=[ 157], 01:13:57.638 | 99.99th=[ 157] 01:13:57.638 bw ( KiB/s): min= 640, max= 1282, per=3.85%, avg=792.10, stdev=160.80, samples=20 01:13:57.638 iops : min= 160, max= 320, avg=198.00, stdev=40.12, samples=20 01:13:57.638 lat (msec) : 50=11.07%, 100=69.69%, 250=19.24% 01:13:57.638 cpu : usr=37.37%, sys=0.94%, ctx=1166, majf=0, minf=9 01:13:57.638 IO depths : 1=2.6%, 2=5.5%, 4=15.0%, 8=66.4%, 16=10.5%, 32=0.0%, >=64=0.0% 01:13:57.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.638 complete : 0=0.0%, 4=91.2%, 8=3.7%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.638 issued rwts: total=1996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:57.638 latency : target=0, window=0, percentile=100.00%, depth=16 01:13:57.638 filename2: (groupid=0, jobs=1): err= 0: pid=107855: Mon Dec 9 06:12:50 2024 01:13:57.638 read: IOPS=214, BW=858KiB/s (878kB/s)(8600KiB/10028msec) 01:13:57.638 slat (usec): min=4, max=8026, avg=19.09, stdev=244.38 01:13:57.638 clat (msec): min=23, max=157, avg=74.50, stdev=24.07 01:13:57.638 lat (msec): min=23, max=157, avg=74.51, stdev=24.06 01:13:57.638 clat percentiles (msec): 01:13:57.638 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 51], 01:13:57.638 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 79], 01:13:57.638 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 120], 01:13:57.638 | 99.00th=[ 136], 99.50th=[ 157], 99.90th=[ 159], 99.95th=[ 159], 01:13:57.638 | 99.99th=[ 159] 01:13:57.638 bw ( KiB/s): min= 656, max= 1296, per=4.15%, avg=853.50, stdev=165.44, samples=20 01:13:57.638 iops : min= 164, max= 324, avg=213.35, stdev=41.37, samples=20 01:13:57.638 lat (msec) : 50=20.09%, 100=67.81%, 250=12.09% 01:13:57.638 cpu : usr=33.20%, sys=1.06%, ctx=893, majf=0, minf=10 01:13:57.638 IO depths : 1=0.9%, 2=2.3%, 4=9.3%, 8=74.8%, 16=12.7%, 32=0.0%, >=64=0.0% 01:13:57.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.638 complete : 0=0.0%, 4=90.1%, 8=5.4%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.638 issued rwts: total=2150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:57.638 latency : target=0, window=0, percentile=100.00%, depth=16 01:13:57.638 filename2: (groupid=0, jobs=1): err= 0: pid=107856: Mon Dec 9 06:12:50 2024 01:13:57.638 read: IOPS=222, BW=889KiB/s (910kB/s)(8956KiB/10079msec) 01:13:57.638 slat (usec): min=4, max=8037, avg=20.48, stdev=254.10 01:13:57.638 
clat (msec): min=8, max=164, avg=71.70, stdev=24.34 01:13:57.638 lat (msec): min=8, max=164, avg=71.72, stdev=24.34 01:13:57.638 clat percentiles (msec): 01:13:57.638 | 1.00th=[ 14], 5.00th=[ 32], 10.00th=[ 41], 20.00th=[ 53], 01:13:57.638 | 30.00th=[ 60], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 74], 01:13:57.638 | 70.00th=[ 84], 80.00th=[ 93], 90.00th=[ 107], 95.00th=[ 111], 01:13:57.638 | 99.00th=[ 136], 99.50th=[ 136], 99.90th=[ 165], 99.95th=[ 165], 01:13:57.638 | 99.99th=[ 165] 01:13:57.638 bw ( KiB/s): min= 688, max= 1725, per=4.33%, avg=891.05, stdev=224.68, samples=20 01:13:57.638 iops : min= 172, max= 431, avg=222.75, stdev=56.12, samples=20 01:13:57.638 lat (msec) : 10=0.71%, 20=1.61%, 50=16.30%, 100=69.36%, 250=12.01% 01:13:57.638 cpu : usr=34.32%, sys=1.12%, ctx=1023, majf=0, minf=9 01:13:57.638 IO depths : 1=1.5%, 2=4.2%, 4=14.1%, 8=68.4%, 16=11.8%, 32=0.0%, >=64=0.0% 01:13:57.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.638 complete : 0=0.0%, 4=91.3%, 8=3.8%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.638 issued rwts: total=2239,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:57.638 latency : target=0, window=0, percentile=100.00%, depth=16 01:13:57.638 filename2: (groupid=0, jobs=1): err= 0: pid=107857: Mon Dec 9 06:12:50 2024 01:13:57.638 read: IOPS=228, BW=915KiB/s (937kB/s)(9224KiB/10080msec) 01:13:57.638 slat (usec): min=3, max=8028, avg=20.42, stdev=193.73 01:13:57.638 clat (msec): min=14, max=132, avg=69.66, stdev=23.35 01:13:57.638 lat (msec): min=14, max=132, avg=69.68, stdev=23.36 01:13:57.638 clat percentiles (msec): 01:13:57.638 | 1.00th=[ 22], 5.00th=[ 34], 10.00th=[ 44], 20.00th=[ 48], 01:13:57.638 | 30.00th=[ 57], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 72], 01:13:57.638 | 70.00th=[ 79], 80.00th=[ 87], 90.00th=[ 105], 95.00th=[ 112], 01:13:57.638 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 132], 01:13:57.638 | 99.99th=[ 132] 01:13:57.638 bw ( KiB/s): min= 640, max= 1571, per=4.45%, avg=915.35, stdev=204.57, samples=20 01:13:57.638 iops : min= 160, max= 392, avg=228.80, stdev=51.02, samples=20 01:13:57.638 lat (msec) : 20=0.69%, 50=24.24%, 100=64.35%, 250=10.71% 01:13:57.638 cpu : usr=35.72%, sys=1.08%, ctx=1059, majf=0, minf=9 01:13:57.638 IO depths : 1=1.5%, 2=3.6%, 4=11.9%, 8=71.3%, 16=11.8%, 32=0.0%, >=64=0.0% 01:13:57.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.638 complete : 0=0.0%, 4=90.8%, 8=4.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.638 issued rwts: total=2306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:57.638 latency : target=0, window=0, percentile=100.00%, depth=16 01:13:57.638 filename2: (groupid=0, jobs=1): err= 0: pid=107858: Mon Dec 9 06:12:50 2024 01:13:57.638 read: IOPS=194, BW=779KiB/s (798kB/s)(7820KiB/10036msec) 01:13:57.638 slat (usec): min=8, max=8059, avg=46.58, stdev=331.47 01:13:57.638 clat (msec): min=25, max=178, avg=81.83, stdev=24.95 01:13:57.638 lat (msec): min=25, max=178, avg=81.88, stdev=24.97 01:13:57.638 clat percentiles (msec): 01:13:57.638 | 1.00th=[ 31], 5.00th=[ 43], 10.00th=[ 49], 20.00th=[ 66], 01:13:57.638 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 85], 01:13:57.638 | 70.00th=[ 95], 80.00th=[ 99], 90.00th=[ 117], 95.00th=[ 132], 01:13:57.638 | 99.00th=[ 144], 99.50th=[ 167], 99.90th=[ 178], 99.95th=[ 178], 01:13:57.638 | 99.99th=[ 178] 01:13:57.638 bw ( KiB/s): min= 640, max= 1282, per=3.77%, avg=775.70, stdev=153.76, samples=20 01:13:57.638 iops : min= 160, max= 320, avg=193.90, stdev=38.35, samples=20 
01:13:57.638 lat (msec) : 50=11.82%, 100=69.41%, 250=18.77% 01:13:57.638 cpu : usr=33.66%, sys=0.89%, ctx=971, majf=0, minf=9 01:13:57.638 IO depths : 1=2.0%, 2=5.4%, 4=17.4%, 8=64.3%, 16=10.8%, 32=0.0%, >=64=0.0% 01:13:57.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.638 complete : 0=0.0%, 4=91.9%, 8=2.8%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.638 issued rwts: total=1955,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:57.638 latency : target=0, window=0, percentile=100.00%, depth=16 01:13:57.638 filename2: (groupid=0, jobs=1): err= 0: pid=107859: Mon Dec 9 06:12:50 2024 01:13:57.638 read: IOPS=205, BW=824KiB/s (844kB/s)(8276KiB/10046msec) 01:13:57.638 slat (nsec): min=3990, max=50680, avg=12096.96, stdev=4483.09 01:13:57.638 clat (msec): min=23, max=214, avg=77.43, stdev=27.03 01:13:57.638 lat (msec): min=23, max=214, avg=77.44, stdev=27.02 01:13:57.638 clat percentiles (msec): 01:13:57.638 | 1.00th=[ 30], 5.00th=[ 34], 10.00th=[ 48], 20.00th=[ 61], 01:13:57.638 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 81], 01:13:57.638 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 111], 95.00th=[ 132], 01:13:57.638 | 99.00th=[ 146], 99.50th=[ 169], 99.90th=[ 215], 99.95th=[ 215], 01:13:57.638 | 99.99th=[ 215] 01:13:57.638 bw ( KiB/s): min= 512, max= 1408, per=4.00%, avg=823.60, stdev=173.48, samples=20 01:13:57.638 iops : min= 128, max= 352, avg=205.90, stdev=43.37, samples=20 01:13:57.638 lat (msec) : 50=14.50%, 100=67.67%, 250=17.83% 01:13:57.638 cpu : usr=33.17%, sys=1.02%, ctx=909, majf=0, minf=9 01:13:57.638 IO depths : 1=2.5%, 2=5.9%, 4=16.1%, 8=65.1%, 16=10.3%, 32=0.0%, >=64=0.0% 01:13:57.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.638 complete : 0=0.0%, 4=91.7%, 8=3.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.638 issued rwts: total=2069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:57.638 latency : target=0, window=0, percentile=100.00%, depth=16 01:13:57.638 filename2: (groupid=0, jobs=1): err= 0: pid=107860: Mon Dec 9 06:12:50 2024 01:13:57.638 read: IOPS=238, BW=955KiB/s (978kB/s)(9572KiB/10025msec) 01:13:57.638 slat (usec): min=4, max=8052, avg=21.68, stdev=172.03 01:13:57.638 clat (msec): min=30, max=154, avg=66.92, stdev=19.47 01:13:57.638 lat (msec): min=30, max=154, avg=66.94, stdev=19.48 01:13:57.638 clat percentiles (msec): 01:13:57.638 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 50], 01:13:57.638 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 72], 01:13:57.638 | 70.00th=[ 74], 80.00th=[ 83], 90.00th=[ 89], 95.00th=[ 97], 01:13:57.638 | 99.00th=[ 131], 99.50th=[ 144], 99.90th=[ 155], 99.95th=[ 155], 01:13:57.638 | 99.99th=[ 155] 01:13:57.638 bw ( KiB/s): min= 608, max= 1248, per=4.62%, avg=950.85, stdev=150.34, samples=20 01:13:57.638 iops : min= 152, max= 312, avg=237.70, stdev=37.59, samples=20 01:13:57.638 lat (msec) : 50=20.81%, 100=74.93%, 250=4.26% 01:13:57.638 cpu : usr=39.89%, sys=0.94%, ctx=1098, majf=0, minf=9 01:13:57.638 IO depths : 1=0.7%, 2=1.5%, 4=7.3%, 8=77.4%, 16=13.1%, 32=0.0%, >=64=0.0% 01:13:57.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.638 complete : 0=0.0%, 4=89.5%, 8=6.2%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 01:13:57.638 issued rwts: total=2393,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:13:57.638 latency : target=0, window=0, percentile=100.00%, depth=16 01:13:57.638 01:13:57.638 Run status group 0 (all jobs): 01:13:57.639 READ: bw=20.1MiB/s (21.1MB/s), 774KiB/s-955KiB/s (793kB/s-978kB/s), io=203MiB (212MB), 
run=10013-10085msec 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:57.639 bdev_null0 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:57.639 [2024-12-09 06:12:50.427996] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:57.639 bdev_null1 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:13:57.639 { 01:13:57.639 "params": { 01:13:57.639 "name": "Nvme$subsystem", 01:13:57.639 "trtype": "$TEST_TRANSPORT", 01:13:57.639 "traddr": "$NVMF_FIRST_TARGET_IP", 01:13:57.639 "adrfam": "ipv4", 01:13:57.639 "trsvcid": "$NVMF_PORT", 01:13:57.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:13:57.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:13:57.639 "hdgst": ${hdgst:-false}, 01:13:57.639 "ddgst": ${ddgst:-false} 01:13:57.639 }, 01:13:57.639 "method": "bdev_nvme_attach_controller" 01:13:57.639 } 01:13:57.639 EOF 01:13:57.639 )") 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1343 -- # local sanitizers 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:13:57.639 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 01:13:57.640 06:12:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:13:57.640 06:12:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:13:57.640 { 01:13:57.640 "params": { 01:13:57.640 "name": "Nvme$subsystem", 01:13:57.640 "trtype": "$TEST_TRANSPORT", 01:13:57.640 "traddr": "$NVMF_FIRST_TARGET_IP", 01:13:57.640 "adrfam": "ipv4", 01:13:57.640 "trsvcid": "$NVMF_PORT", 01:13:57.640 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:13:57.640 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:13:57.640 "hdgst": ${hdgst:-false}, 01:13:57.640 "ddgst": ${ddgst:-false} 01:13:57.640 }, 01:13:57.640 "method": "bdev_nvme_attach_controller" 01:13:57.640 } 01:13:57.640 EOF 01:13:57.640 )") 01:13:57.640 06:12:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 01:13:57.640 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 01:13:57.640 06:12:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:13:57.640 06:12:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
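The trace here is the harness wiring fio to the SPDK bdev ioengine: gen_nvmf_target_json has just assembled one bdev_nvme_attach_controller stanza per subsystem and pipes the result through jq, while fio_bdev probes the plugin with ldd for sanitizer libraries to preload and then, a few lines below, launches the system fio with the generated job file on fd 61 and the bdev JSON config on fd 62. A condensed sketch of that launch pattern, with paths taken from the trace and everything else illustrative:

    # Sketch of the fio_bdev launch seen in this trace (illustrative only).
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # Preload ASAN first if the plugin was built with it; empty on a normal build.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev \
        --spdk_json_conf /dev/fd/62 \
        /dev/fd/61   # fd 62: attach-controller JSON config, fd 61: fio job file
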
01:13:57.640 06:12:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 01:13:57.640 06:12:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:13:57.640 "params": { 01:13:57.640 "name": "Nvme0", 01:13:57.640 "trtype": "tcp", 01:13:57.640 "traddr": "10.0.0.3", 01:13:57.640 "adrfam": "ipv4", 01:13:57.640 "trsvcid": "4420", 01:13:57.640 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:13:57.640 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:13:57.640 "hdgst": false, 01:13:57.640 "ddgst": false 01:13:57.640 }, 01:13:57.640 "method": "bdev_nvme_attach_controller" 01:13:57.640 },{ 01:13:57.640 "params": { 01:13:57.640 "name": "Nvme1", 01:13:57.640 "trtype": "tcp", 01:13:57.640 "traddr": "10.0.0.3", 01:13:57.640 "adrfam": "ipv4", 01:13:57.640 "trsvcid": "4420", 01:13:57.640 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:13:57.640 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:13:57.640 "hdgst": false, 01:13:57.640 "ddgst": false 01:13:57.640 }, 01:13:57.640 "method": "bdev_nvme_attach_controller" 01:13:57.640 }' 01:13:57.640 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:13:57.640 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:13:57.640 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:13:57.640 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:13:57.640 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:13:57.640 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:13:57.640 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 01:13:57.640 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:13:57.640 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:13:57.640 06:12:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:13:57.640 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 01:13:57.640 ... 01:13:57.640 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 01:13:57.640 ... 
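Before the run starts below, fio echoes one job group per file; these correspond to the parameters set at target/dif.sh@115 a few lines above (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5), and with two jobs over two files fio reports four threads next. A hand-written approximation of the job file the test feeds in on fd 61 follows; the real gen_fio_conf output may differ in detail, and the Nvme0n1/Nvme1n1 filenames are the conventional SPDK bdev names rather than something printed in this trace:

    # Approximate fio job file for the randread run below (illustrative sketch).
    cat <<'EOF'
    [global]
    rw=randread
    bs=8k,16k,128k
    iodepth=8
    numjobs=2
    runtime=5

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1
    EOF
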
01:13:57.640 fio-3.35 01:13:57.640 Starting 4 threads 01:14:01.827 01:14:01.827 filename0: (groupid=0, jobs=1): err= 0: pid=107981: Mon Dec 9 06:12:56 2024 01:14:01.827 read: IOPS=1858, BW=14.5MiB/s (15.2MB/s)(72.6MiB/5003msec) 01:14:01.827 slat (usec): min=6, max=279, avg=16.26, stdev= 6.70 01:14:01.827 clat (usec): min=2246, max=9800, avg=4223.85, stdev=413.10 01:14:01.827 lat (usec): min=2258, max=9819, avg=4240.11, stdev=413.13 01:14:01.827 clat percentiles (usec): 01:14:01.827 | 1.00th=[ 3785], 5.00th=[ 3916], 10.00th=[ 4015], 20.00th=[ 4047], 01:14:01.827 | 30.00th=[ 4080], 40.00th=[ 4113], 50.00th=[ 4146], 60.00th=[ 4146], 01:14:01.827 | 70.00th=[ 4228], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 5342], 01:14:01.827 | 99.00th=[ 5735], 99.50th=[ 5932], 99.90th=[ 8291], 99.95th=[ 9765], 01:14:01.827 | 99.99th=[ 9765] 01:14:01.827 bw ( KiB/s): min=14364, max=15232, per=24.99%, avg=14870.67, stdev=352.45, samples=9 01:14:01.827 iops : min= 1795, max= 1904, avg=1858.78, stdev=44.15, samples=9 01:14:01.827 lat (msec) : 4=9.97%, 10=90.03% 01:14:01.827 cpu : usr=92.98%, sys=5.42%, ctx=110, majf=0, minf=0 01:14:01.827 IO depths : 1=12.0%, 2=25.0%, 4=50.0%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:14:01.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:01.827 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:01.827 issued rwts: total=9299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:01.827 latency : target=0, window=0, percentile=100.00%, depth=8 01:14:01.827 filename0: (groupid=0, jobs=1): err= 0: pid=107982: Mon Dec 9 06:12:56 2024 01:14:01.827 read: IOPS=1859, BW=14.5MiB/s (15.2MB/s)(72.7MiB/5003msec) 01:14:01.827 slat (nsec): min=5122, max=59412, avg=9487.51, stdev=3519.36 01:14:01.827 clat (usec): min=3346, max=9974, avg=4251.81, stdev=390.71 01:14:01.827 lat (usec): min=3355, max=9981, avg=4261.30, stdev=390.92 01:14:01.827 clat percentiles (usec): 01:14:01.827 | 1.00th=[ 3851], 5.00th=[ 3949], 10.00th=[ 4047], 20.00th=[ 4113], 01:14:01.827 | 30.00th=[ 4113], 40.00th=[ 4146], 50.00th=[ 4146], 60.00th=[ 4178], 01:14:01.827 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 5276], 01:14:01.827 | 99.00th=[ 5669], 99.50th=[ 5735], 99.90th=[ 8717], 99.95th=[ 9503], 01:14:01.827 | 99.99th=[10028] 01:14:01.827 bw ( KiB/s): min=14336, max=15232, per=25.00%, avg=14876.44, stdev=366.41, samples=9 01:14:01.827 iops : min= 1792, max= 1904, avg=1859.56, stdev=45.80, samples=9 01:14:01.827 lat (msec) : 4=7.70%, 10=92.30% 01:14:01.827 cpu : usr=94.28%, sys=4.52%, ctx=81, majf=0, minf=0 01:14:01.827 IO depths : 1=12.1%, 2=25.0%, 4=50.0%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 01:14:01.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:01.827 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:01.827 issued rwts: total=9304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:01.827 latency : target=0, window=0, percentile=100.00%, depth=8 01:14:01.827 filename1: (groupid=0, jobs=1): err= 0: pid=107983: Mon Dec 9 06:12:56 2024 01:14:01.827 read: IOPS=1859, BW=14.5MiB/s (15.2MB/s)(72.6MiB/5002msec) 01:14:01.827 slat (nsec): min=7065, max=57875, avg=16225.27, stdev=5715.85 01:14:01.827 clat (usec): min=2305, max=9837, avg=4223.76, stdev=418.85 01:14:01.827 lat (usec): min=2316, max=9851, avg=4239.99, stdev=418.51 01:14:01.827 clat percentiles (usec): 01:14:01.827 | 1.00th=[ 3818], 5.00th=[ 3916], 10.00th=[ 3982], 20.00th=[ 4047], 01:14:01.827 | 30.00th=[ 4080], 40.00th=[ 4113], 50.00th=[ 
4146], 60.00th=[ 4178], 01:14:01.827 | 70.00th=[ 4228], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 5342], 01:14:01.827 | 99.00th=[ 5669], 99.50th=[ 5800], 99.90th=[ 8225], 99.95th=[ 9765], 01:14:01.827 | 99.99th=[ 9896] 01:14:01.827 bw ( KiB/s): min=14256, max=15360, per=25.01%, avg=14881.78, stdev=423.96, samples=9 01:14:01.827 iops : min= 1782, max= 1920, avg=1860.22, stdev=52.99, samples=9 01:14:01.827 lat (msec) : 4=10.79%, 10=89.21% 01:14:01.827 cpu : usr=94.50%, sys=4.42%, ctx=11, majf=0, minf=0 01:14:01.827 IO depths : 1=11.9%, 2=24.8%, 4=50.2%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 01:14:01.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:01.827 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:01.827 issued rwts: total=9299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:01.827 latency : target=0, window=0, percentile=100.00%, depth=8 01:14:01.827 filename1: (groupid=0, jobs=1): err= 0: pid=107984: Mon Dec 9 06:12:56 2024 01:14:01.827 read: IOPS=1860, BW=14.5MiB/s (15.2MB/s)(72.8MiB/5004msec) 01:14:01.827 slat (nsec): min=7296, max=63006, avg=14548.50, stdev=5808.45 01:14:01.827 clat (usec): min=3058, max=9855, avg=4232.48, stdev=404.62 01:14:01.827 lat (usec): min=3070, max=9867, avg=4247.02, stdev=404.24 01:14:01.827 clat percentiles (usec): 01:14:01.827 | 1.00th=[ 3818], 5.00th=[ 3916], 10.00th=[ 4015], 20.00th=[ 4080], 01:14:01.827 | 30.00th=[ 4113], 40.00th=[ 4113], 50.00th=[ 4146], 60.00th=[ 4178], 01:14:01.827 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4359], 95.00th=[ 5342], 01:14:01.827 | 99.00th=[ 5669], 99.50th=[ 5800], 99.90th=[ 8356], 99.95th=[ 9765], 01:14:01.827 | 99.99th=[ 9896] 01:14:01.827 bw ( KiB/s): min=14464, max=15232, per=25.03%, avg=14890.67, stdev=344.65, samples=9 01:14:01.827 iops : min= 1808, max= 1904, avg=1861.33, stdev=43.08, samples=9 01:14:01.827 lat (msec) : 4=9.09%, 10=90.91% 01:14:01.827 cpu : usr=94.42%, sys=4.38%, ctx=5, majf=0, minf=0 01:14:01.827 IO depths : 1=12.1%, 2=25.0%, 4=50.0%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 01:14:01.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:01.827 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:01.827 issued rwts: total=9312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:01.827 latency : target=0, window=0, percentile=100.00%, depth=8 01:14:01.827 01:14:01.827 Run status group 0 (all jobs): 01:14:01.827 READ: bw=58.1MiB/s (60.9MB/s), 14.5MiB/s-14.5MiB/s (15.2MB/s-15.2MB/s), io=291MiB (305MB), run=5002-5004msec 01:14:02.087 06:12:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 01:14:02.087 06:12:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 01:14:02.087 06:12:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:14:02.087 06:12:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 01:14:02.087 06:12:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 01:14:02.087 06:12:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:14:02.087 06:12:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:02.087 06:12:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:02.087 06:12:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:02.087 06:12:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null0 01:14:02.087 06:12:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:02.087 06:12:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:02.087 06:12:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:02.087 06:12:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:14:02.087 06:12:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 01:14:02.087 06:12:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 01:14:02.087 06:12:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:14:02.087 06:12:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:02.087 06:12:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:02.087 06:12:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:02.087 06:12:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 01:14:02.087 06:12:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:02.087 06:12:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:02.087 ************************************ 01:14:02.087 END TEST fio_dif_rand_params 01:14:02.087 ************************************ 01:14:02.087 06:12:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:02.087 01:14:02.087 real 0m23.595s 01:14:02.087 user 2m5.567s 01:14:02.087 sys 0m5.181s 01:14:02.087 06:12:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:02.087 06:12:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:14:02.087 06:12:56 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 01:14:02.087 06:12:56 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:02.087 06:12:56 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:02.087 06:12:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:14:02.087 ************************************ 01:14:02.087 START TEST fio_dif_digest 01:14:02.087 ************************************ 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 01:14:02.087 06:12:56 
nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:14:02.087 bdev_null0 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:14:02.087 [2024-12-09 06:12:56.597586] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 01:14:02.087 06:12:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 01:14:02.087 { 01:14:02.087 "params": { 01:14:02.087 "name": "Nvme$subsystem", 01:14:02.087 "trtype": "$TEST_TRANSPORT", 
01:14:02.087 "traddr": "$NVMF_FIRST_TARGET_IP", 01:14:02.087 "adrfam": "ipv4", 01:14:02.087 "trsvcid": "$NVMF_PORT", 01:14:02.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:14:02.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:14:02.088 "hdgst": ${hdgst:-false}, 01:14:02.088 "ddgst": ${ddgst:-false} 01:14:02.088 }, 01:14:02.088 "method": "bdev_nvme_attach_controller" 01:14:02.088 } 01:14:02.088 EOF 01:14:02.088 )") 01:14:02.088 06:12:56 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 01:14:02.088 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:14:02.088 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 01:14:02.088 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:14:02.088 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 01:14:02.088 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 01:14:02.088 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:14:02.088 06:12:56 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 01:14:02.088 06:12:56 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 01:14:02.088 06:12:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 01:14:02.088 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:14:02.088 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 01:14:02.088 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:14:02.088 06:12:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
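The fio_dif_digest setup a few lines above mirrors the earlier tests, except that the namespace is re-created on a DIF type 3 null bdev and both TCP digests will be enabled. The rpc_cmd calls in the trace reduce to the following invocations, with every value copied from the trace; scripts/rpc.py is assumed as the usual entry point behind the rpc_cmd wrapper:

    # Target-side setup for the digest test, as issued via rpc_cmd above.
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.3 -s 4420
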
01:14:02.088 06:12:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 01:14:02.088 06:12:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 01:14:02.088 "params": { 01:14:02.088 "name": "Nvme0", 01:14:02.088 "trtype": "tcp", 01:14:02.088 "traddr": "10.0.0.3", 01:14:02.088 "adrfam": "ipv4", 01:14:02.088 "trsvcid": "4420", 01:14:02.088 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:14:02.088 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:14:02.088 "hdgst": true, 01:14:02.088 "ddgst": true 01:14:02.088 }, 01:14:02.088 "method": "bdev_nvme_attach_controller" 01:14:02.088 }' 01:14:02.088 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 01:14:02.088 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:14:02.088 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 01:14:02.088 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 01:14:02.088 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 01:14:02.088 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 01:14:02.088 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 01:14:02.088 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 01:14:02.088 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 01:14:02.088 06:12:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:14:02.346 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 01:14:02.346 ... 
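In the merged config printed a few lines above, "hdgst" and "ddgst" are both true, unlike the earlier fio_dif_rand_params runs where they were false: they enable the NVMe/TCP header and data digests, so each PDU on the connection carries a CRC32C that initiator and target verify. Purely as an illustration, the same flags could be flipped on a saved stanza with jq; stanza.json here is a hypothetical file holding one attach-controller object like those printed in this log:

    # Illustrative only: enable both NVMe/TCP digests in a stanza file.
    jq '.params.hdgst = true | .params.ddgst = true' stanza.json
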
01:14:02.346 fio-3.35 01:14:02.346 Starting 3 threads 01:14:14.550 01:14:14.550 filename0: (groupid=0, jobs=1): err= 0: pid=108092: Mon Dec 9 06:13:07 2024 01:14:14.550 read: IOPS=197, BW=24.7MiB/s (25.9MB/s)(248MiB/10006msec) 01:14:14.550 slat (nsec): min=7328, max=47725, avg=13103.28, stdev=4287.02 01:14:14.550 clat (usec): min=6428, max=19403, avg=15134.86, stdev=1399.66 01:14:14.550 lat (usec): min=6436, max=19412, avg=15147.97, stdev=1399.56 01:14:14.550 clat percentiles (usec): 01:14:14.550 | 1.00th=[ 9372], 5.00th=[13173], 10.00th=[13566], 20.00th=[14091], 01:14:14.550 | 30.00th=[14484], 40.00th=[14877], 50.00th=[15139], 60.00th=[15401], 01:14:14.550 | 70.00th=[15795], 80.00th=[16188], 90.00th=[16712], 95.00th=[17171], 01:14:14.550 | 99.00th=[18220], 99.50th=[18482], 99.90th=[19268], 99.95th=[19530], 01:14:14.550 | 99.99th=[19530] 01:14:14.550 bw ( KiB/s): min=24576, max=27392, per=32.97%, avg=25344.00, stdev=739.01, samples=19 01:14:14.550 iops : min= 192, max= 214, avg=198.00, stdev= 5.77, samples=19 01:14:14.550 lat (msec) : 10=1.26%, 20=98.74% 01:14:14.550 cpu : usr=92.82%, sys=5.75%, ctx=21, majf=0, minf=0 01:14:14.550 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:14:14.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:14.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:14.550 issued rwts: total=1981,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:14.550 latency : target=0, window=0, percentile=100.00%, depth=3 01:14:14.550 filename0: (groupid=0, jobs=1): err= 0: pid=108093: Mon Dec 9 06:13:07 2024 01:14:14.550 read: IOPS=234, BW=29.3MiB/s (30.7MB/s)(293MiB/10008msec) 01:14:14.550 slat (nsec): min=4463, max=62489, avg=14188.17, stdev=5501.27 01:14:14.550 clat (usec): min=9321, max=54576, avg=12783.72, stdev=2282.86 01:14:14.550 lat (usec): min=9330, max=54589, avg=12797.90, stdev=2283.10 01:14:14.550 clat percentiles (usec): 01:14:14.550 | 1.00th=[10421], 5.00th=[11076], 10.00th=[11469], 20.00th=[11863], 01:14:14.550 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12649], 60.00th=[12911], 01:14:14.550 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13960], 95.00th=[14222], 01:14:14.550 | 99.00th=[15139], 99.50th=[15401], 99.90th=[53740], 99.95th=[53740], 01:14:14.550 | 99.99th=[54789] 01:14:14.550 bw ( KiB/s): min=27904, max=31488, per=38.96%, avg=29952.00, stdev=853.33, samples=19 01:14:14.550 iops : min= 218, max= 246, avg=234.00, stdev= 6.67, samples=19 01:14:14.550 lat (msec) : 10=0.30%, 20=99.45%, 100=0.26% 01:14:14.550 cpu : usr=92.50%, sys=5.86%, ctx=13, majf=0, minf=0 01:14:14.550 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:14:14.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:14.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:14.550 issued rwts: total=2345,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:14.550 latency : target=0, window=0, percentile=100.00%, depth=3 01:14:14.550 filename0: (groupid=0, jobs=1): err= 0: pid=108094: Mon Dec 9 06:13:07 2024 01:14:14.550 read: IOPS=168, BW=21.0MiB/s (22.1MB/s)(211MiB/10006msec) 01:14:14.550 slat (nsec): min=7277, max=54270, avg=13622.05, stdev=4532.41 01:14:14.550 clat (usec): min=7917, max=23284, avg=17796.50, stdev=1298.61 01:14:14.550 lat (usec): min=7943, max=23312, avg=17810.12, stdev=1298.90 01:14:14.550 clat percentiles (usec): 01:14:14.550 | 1.00th=[11863], 5.00th=[16188], 10.00th=[16581], 20.00th=[16909], 01:14:14.550 | 30.00th=[17433], 
40.00th=[17433], 50.00th=[17957], 60.00th=[17957], 01:14:14.550 | 70.00th=[18220], 80.00th=[18744], 90.00th=[19268], 95.00th=[19530], 01:14:14.550 | 99.00th=[20579], 99.50th=[21365], 99.90th=[23200], 99.95th=[23200], 01:14:14.550 | 99.99th=[23200] 01:14:14.550 bw ( KiB/s): min=20480, max=22784, per=28.00%, avg=21530.95, stdev=584.36, samples=19 01:14:14.550 iops : min= 160, max= 178, avg=168.21, stdev= 4.57, samples=19 01:14:14.550 lat (msec) : 10=0.06%, 20=97.21%, 50=2.73% 01:14:14.550 cpu : usr=92.26%, sys=6.43%, ctx=7, majf=0, minf=0 01:14:14.550 IO depths : 1=2.8%, 2=97.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:14:14.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:14.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:14:14.550 issued rwts: total=1685,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:14:14.550 latency : target=0, window=0, percentile=100.00%, depth=3 01:14:14.550 01:14:14.550 Run status group 0 (all jobs): 01:14:14.550 READ: bw=75.1MiB/s (78.7MB/s), 21.0MiB/s-29.3MiB/s (22.1MB/s-30.7MB/s), io=751MiB (788MB), run=10006-10008msec 01:14:14.550 06:13:07 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 01:14:14.550 06:13:07 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 01:14:14.550 06:13:07 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 01:14:14.550 06:13:07 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 01:14:14.550 06:13:07 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 01:14:14.550 06:13:07 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:14:14.550 06:13:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:14.550 06:13:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:14:14.550 06:13:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:14.550 06:13:07 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:14:14.550 06:13:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:14.550 06:13:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:14:14.550 06:13:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:14.550 01:14:14.550 real 0m10.943s 01:14:14.550 user 0m28.427s 01:14:14.550 sys 0m2.024s 01:14:14.550 06:13:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:14.550 ************************************ 01:14:14.550 END TEST fio_dif_digest 01:14:14.550 ************************************ 01:14:14.550 06:13:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:14:14.550 06:13:07 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 01:14:14.550 06:13:07 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 01:14:14.550 06:13:07 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 01:14:14.550 06:13:07 nvmf_dif -- nvmf/common.sh@121 -- # sync 01:14:14.550 06:13:07 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:14:14.550 06:13:07 nvmf_dif -- nvmf/common.sh@124 -- # set +e 01:14:14.550 06:13:07 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 01:14:14.550 06:13:07 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:14:14.550 rmmod nvme_tcp 01:14:14.550 rmmod nvme_fabrics 01:14:14.550 rmmod nvme_keyring 01:14:14.550 06:13:07 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 01:14:14.550 06:13:07 nvmf_dif -- nvmf/common.sh@128 -- # set -e 01:14:14.550 06:13:07 nvmf_dif -- nvmf/common.sh@129 -- # return 0 01:14:14.550 06:13:07 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 107358 ']' 01:14:14.550 06:13:07 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 107358 01:14:14.550 06:13:07 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 107358 ']' 01:14:14.550 06:13:07 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 107358 01:14:14.550 06:13:07 nvmf_dif -- common/autotest_common.sh@959 -- # uname 01:14:14.550 06:13:07 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:14:14.550 06:13:07 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107358 01:14:14.550 06:13:07 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:14:14.550 06:13:07 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:14:14.550 killing process with pid 107358 01:14:14.550 06:13:07 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107358' 01:14:14.550 06:13:07 nvmf_dif -- common/autotest_common.sh@973 -- # kill 107358 01:14:14.550 06:13:07 nvmf_dif -- common/autotest_common.sh@978 -- # wait 107358 01:14:14.550 06:13:07 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 01:14:14.550 06:13:07 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:14:14.550 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:14:14.550 Waiting for block devices as requested 01:14:14.550 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:14:14.550 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:14:14.550 06:13:08 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:14:14.550 06:13:08 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:14:14.550 06:13:08 nvmf_dif -- nvmf/common.sh@297 -- # iptr 01:14:14.550 06:13:08 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 01:14:14.550 06:13:08 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:14:14.550 06:13:08 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 01:14:14.550 06:13:08 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:14:14.550 06:13:08 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:14:14.550 06:13:08 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:14:14.550 06:13:08 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:14:14.550 06:13:08 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:14:14.550 06:13:08 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:14:14.550 06:13:08 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:14:14.550 06:13:08 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:14:14.550 06:13:08 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:14:14.550 06:13:08 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:14:14.550 06:13:08 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:14:14.550 06:13:08 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:14:14.550 06:13:08 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:14:14.550 06:13:08 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:14:14.550 06:13:08 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if2 01:14:14.550 06:13:08 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 01:14:14.550 06:13:08 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:14:14.550 06:13:08 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:14:14.550 06:13:08 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:14:14.550 06:13:08 nvmf_dif -- nvmf/common.sh@300 -- # return 0 01:14:14.550 01:14:14.550 real 0m59.439s 01:14:14.550 user 3m51.381s 01:14:14.551 sys 0m14.388s 01:14:14.551 06:13:08 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:14.551 06:13:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:14:14.551 ************************************ 01:14:14.551 END TEST nvmf_dif 01:14:14.551 ************************************ 01:14:14.551 06:13:08 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 01:14:14.551 06:13:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:14.551 06:13:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:14.551 06:13:08 -- common/autotest_common.sh@10 -- # set +x 01:14:14.551 ************************************ 01:14:14.551 START TEST nvmf_abort_qd_sizes 01:14:14.551 ************************************ 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 01:14:14.551 * Looking for test storage... 01:14:14.551 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:14:14.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:14.551 --rc genhtml_branch_coverage=1 01:14:14.551 --rc genhtml_function_coverage=1 01:14:14.551 --rc genhtml_legend=1 01:14:14.551 --rc geninfo_all_blocks=1 01:14:14.551 --rc geninfo_unexecuted_blocks=1 01:14:14.551 01:14:14.551 ' 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:14:14.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:14.551 --rc genhtml_branch_coverage=1 01:14:14.551 --rc genhtml_function_coverage=1 01:14:14.551 --rc genhtml_legend=1 01:14:14.551 --rc geninfo_all_blocks=1 01:14:14.551 --rc geninfo_unexecuted_blocks=1 01:14:14.551 01:14:14.551 ' 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:14:14.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:14.551 --rc genhtml_branch_coverage=1 01:14:14.551 --rc genhtml_function_coverage=1 01:14:14.551 --rc genhtml_legend=1 01:14:14.551 --rc geninfo_all_blocks=1 01:14:14.551 --rc geninfo_unexecuted_blocks=1 01:14:14.551 01:14:14.551 ' 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:14:14.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:14.551 --rc genhtml_branch_coverage=1 01:14:14.551 --rc genhtml_function_coverage=1 01:14:14.551 --rc genhtml_legend=1 01:14:14.551 --rc geninfo_all_blocks=1 01:14:14.551 --rc geninfo_unexecuted_blocks=1 01:14:14.551 01:14:14.551 ' 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:14:14.551 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 01:14:14.551 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 01:14:14.552 Cannot find device "nvmf_init_br" 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 01:14:14.552 Cannot find device "nvmf_init_br2" 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 01:14:14.552 Cannot find device "nvmf_tgt_br" 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 01:14:14.552 Cannot find device "nvmf_tgt_br2" 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 01:14:14.552 Cannot find device "nvmf_init_br" 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 01:14:14.552 Cannot find device "nvmf_init_br2" 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 01:14:14.552 Cannot find device "nvmf_tgt_br" 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 01:14:14.552 Cannot find device "nvmf_tgt_br2" 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 01:14:14.552 Cannot find device "nvmf_br" 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 01:14:14.552 06:13:08 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 01:14:14.552 Cannot find device "nvmf_init_if" 01:14:14.552 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 01:14:14.552 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 01:14:14.552 Cannot find device "nvmf_init_if2" 01:14:14.552 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 01:14:14.552 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:14:14.552 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
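The "Cannot find device" and "Cannot open network namespace" messages above are expected on a clean host: nvmf_veth_init first tears down any leftover interfaces and namespace, then builds a fresh virtual topology in the commands that follow, with initiator veths in the root namespace (10.0.0.1, 10.0.0.2), target veths moved into nvmf_tgt_ns_spdk (10.0.0.3, 10.0.0.4), all peer ends enslaved to the nvmf_br bridge, iptables ACCEPT rules for port 4420, and ping checks across the bridge. A condensed sketch of the same steps for one initiator/target pair, commands mirrored from the trace:

    # One initiator/target veth pair of the topology the harness builds (the trace sets up two of each).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3   # initiator-to-target reachability check, as in the trace
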
01:14:14.552 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 01:14:14.552 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:14:14.552 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 01:14:14.552 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 01:14:14.552 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 01:14:14.552 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 01:14:14.552 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 01:14:14.552 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 01:14:14.552 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 01:14:14.552 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 01:14:14.552 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 01:14:14.552 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 01:14:14.552 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 01:14:14.552 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 01:14:14.821 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 01:14:14.821 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 01:14:14.821 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 01:14:14.821 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 01:14:14.821 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 01:14:14.821 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 01:14:14.821 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 01:14:14.821 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 01:14:14.821 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 01:14:14.821 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 01:14:14.821 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 01:14:14.821 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 01:14:14.821 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 01:14:14.821 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 01:14:14.821 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 01:14:14.821 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 01:14:14.821 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 01:14:14.821 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 01:14:14.821 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 01:14:14.821 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 01:14:14.821 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 01:14:14.821 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 01:14:14.821 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 01:14:14.821 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 01:14:14.821 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 01:14:14.821 01:14:14.821 --- 10.0.0.3 ping statistics --- 01:14:14.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:14:14.821 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 01:14:14.821 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 01:14:14.821 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 01:14:14.821 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 01:14:14.821 01:14:14.821 --- 10.0.0.4 ping statistics --- 01:14:14.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:14:14.821 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 01:14:14.821 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 01:14:14.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:14:14.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 01:14:14.821 01:14:14.821 --- 10.0.0.1 ping statistics --- 01:14:14.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:14:14.821 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 01:14:14.821 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 01:14:14.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
01:14:14.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 01:14:14.821 01:14:14.821 --- 10.0.0.2 ping statistics --- 01:14:14.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:14:14.821 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 01:14:14.821 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:14:14.821 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 01:14:14.821 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 01:14:14.822 06:13:09 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:14:15.386 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:14:15.643 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:14:15.643 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:14:15.643 06:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:14:15.643 06:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 01:14:15.643 06:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 01:14:15.643 06:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:14:15.643 06:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 01:14:15.643 06:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 01:14:15.643 06:13:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 01:14:15.643 06:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 01:14:15.643 06:13:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 01:14:15.643 06:13:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:14:15.643 06:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=108732 01:14:15.643 06:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 01:14:15.643 06:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 108732 01:14:15.643 06:13:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 108732 ']' 01:14:15.643 06:13:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:14:15.643 06:13:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 01:14:15.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:14:15.643 06:13:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:14:15.643 06:13:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 01:14:15.643 06:13:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:14:15.643 [2024-12-09 06:13:10.204776] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
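[editor's note] For reference, the veth/namespace/bridge topology that nvmf_veth_init assembled above, and the target launch that follows it, condense to roughly the sketch below. Device names, addresses, and binary paths are taken from the trace; the loops are a restructuring of the per-device commands, and the rpc.py readiness poll is an assumed stand-in for waitforlisten, not the suite's actual code:

    # Sketch only: two initiator veth pairs and two target veth pairs, with the
    # target ends moved into a dedicated network namespace and the host-side
    # peers bridged together.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # first initiator
    ip addr add 10.0.0.2/24 dev nvmf_init_if2                                # second initiator
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # first target
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2  # second target
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br          # bridge the host-side peers together
    done
    # Allow NVMe/TCP traffic on port 4420 (the trace also tags these rules with
    # an iptables comment, omitted here for brevity).
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    modprobe nvme-tcp

    # Start the SPDK target inside the namespace, then wait for its RPC socket
    # to answer before driving it (assumed readiness check).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done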
01:14:15.643 [2024-12-09 06:13:10.204890] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:14:15.901 [2024-12-09 06:13:10.362074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 01:14:15.901 [2024-12-09 06:13:10.403071] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:14:15.901 [2024-12-09 06:13:10.403149] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:14:15.901 [2024-12-09 06:13:10.403176] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:14:15.901 [2024-12-09 06:13:10.403186] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 01:14:15.901 [2024-12-09 06:13:10.403195] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:14:15.901 [2024-12-09 06:13:10.404067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:14:15.901 [2024-12-09 06:13:10.404136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 01:14:15.901 [2024-12-09 06:13:10.404206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 01:14:15.901 [2024-12-09 06:13:10.404209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # 
class=01 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 01:14:16.162 06:13:10 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:16.162 06:13:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:14:16.162 ************************************ 01:14:16.162 START TEST spdk_target_abort 01:14:16.162 ************************************ 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:14:16.162 spdk_targetn1 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:14:16.162 [2024-12-09 06:13:10.651529] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:14:16.162 [2024-12-09 06:13:10.688850] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:16.162 06:13:10 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:14:16.162 06:13:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:14:19.452 Initializing NVMe Controllers 01:14:19.452 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 01:14:19.452 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:14:19.452 Initialization complete. Launching workers. 
01:14:19.452 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10387, failed: 0 01:14:19.452 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1171, failed to submit 9216 01:14:19.452 success 770, unsuccessful 401, failed 0 01:14:19.452 06:13:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:14:19.452 06:13:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:14:23.641 Initializing NVMe Controllers 01:14:23.641 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 01:14:23.641 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:14:23.641 Initialization complete. Launching workers. 01:14:23.641 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5929, failed: 0 01:14:23.641 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1267, failed to submit 4662 01:14:23.641 success 264, unsuccessful 1003, failed 0 01:14:23.641 06:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:14:23.641 06:13:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:14:26.213 Initializing NVMe Controllers 01:14:26.213 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 01:14:26.213 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:14:26.213 Initialization complete. Launching workers. 
01:14:26.213 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27217, failed: 0 01:14:26.213 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2612, failed to submit 24605 01:14:26.213 success 249, unsuccessful 2363, failed 0 01:14:26.213 06:13:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 01:14:26.213 06:13:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:26.213 06:13:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:14:26.213 06:13:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:26.213 06:13:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 01:14:26.213 06:13:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:26.213 06:13:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:14:27.160 06:13:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:27.160 06:13:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 108732 01:14:27.160 06:13:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 108732 ']' 01:14:27.160 06:13:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 108732 01:14:27.160 06:13:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 01:14:27.160 06:13:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:14:27.160 06:13:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108732 01:14:27.160 06:13:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:14:27.160 killing process with pid 108732 01:14:27.160 06:13:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:14:27.160 06:13:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108732' 01:14:27.160 06:13:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 108732 01:14:27.160 06:13:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 108732 01:14:27.160 01:14:27.160 real 0m11.123s 01:14:27.160 user 0m42.820s 01:14:27.160 sys 0m1.723s 01:14:27.160 06:13:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:27.160 06:13:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:14:27.160 ************************************ 01:14:27.160 END TEST spdk_target_abort 01:14:27.160 ************************************ 01:14:27.160 06:13:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 01:14:27.160 06:13:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:27.419 06:13:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:27.419 06:13:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:14:27.419 ************************************ 01:14:27.419 START TEST kernel_target_abort 01:14:27.419 
************************************ 01:14:27.419 06:13:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 01:14:27.419 06:13:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 01:14:27.419 06:13:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 01:14:27.419 06:13:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 01:14:27.419 06:13:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 01:14:27.419 06:13:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:14:27.419 06:13:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:14:27.419 06:13:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 01:14:27.419 06:13:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 01:14:27.419 06:13:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 01:14:27.419 06:13:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 01:14:27.419 06:13:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 01:14:27.419 06:13:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 01:14:27.419 06:13:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 01:14:27.419 06:13:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 01:14:27.419 06:13:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:14:27.419 06:13:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:14:27.420 06:13:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 01:14:27.420 06:13:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 01:14:27.420 06:13:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 01:14:27.420 06:13:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 01:14:27.420 06:13:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 01:14:27.420 06:13:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:14:27.679 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:14:27.679 Waiting for block devices as requested 01:14:27.679 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:14:27.938 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:14:27.938 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:14:27.938 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 01:14:27.939 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 01:14:27.939 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 01:14:27.939 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:14:27.939 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:14:27.939 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 01:14:27.939 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 01:14:27.939 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 01:14:27.939 No valid GPT data, bailing 01:14:27.939 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:14:27.939 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 01:14:27.939 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 01:14:27.939 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 01:14:27.939 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:14:27.939 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 01:14:27.939 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 01:14:27.939 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 01:14:27.939 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 01:14:27.939 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:14:27.939 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 01:14:27.939 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 01:14:27.939 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 01:14:28.198 No valid GPT data, bailing 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 01:14:28.198 No valid GPT data, bailing 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 01:14:28.198 No valid GPT data, bailing 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 01:14:28.198 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 01:14:28.199 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 01:14:28.199 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 01:14:28.199 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 01:14:28.199 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 --hostid=4083adec-450d-4b97-8986-2f4423606fc2 -a 10.0.0.1 -t tcp -s 4420 01:14:28.457 01:14:28.457 Discovery Log Number of Records 2, Generation counter 2 01:14:28.457 =====Discovery Log Entry 0====== 01:14:28.457 trtype: tcp 01:14:28.457 adrfam: ipv4 01:14:28.457 subtype: current discovery subsystem 01:14:28.457 treq: not specified, sq flow control disable supported 01:14:28.457 portid: 1 01:14:28.457 trsvcid: 4420 01:14:28.457 subnqn: nqn.2014-08.org.nvmexpress.discovery 01:14:28.457 traddr: 10.0.0.1 01:14:28.457 eflags: none 01:14:28.457 sectype: none 01:14:28.457 =====Discovery Log Entry 1====== 01:14:28.457 trtype: tcp 01:14:28.457 adrfam: ipv4 01:14:28.457 subtype: nvme subsystem 01:14:28.457 treq: not specified, sq flow control disable supported 01:14:28.457 portid: 1 01:14:28.457 trsvcid: 4420 01:14:28.457 subnqn: nqn.2016-06.io.spdk:testnqn 01:14:28.457 traddr: 10.0.0.1 01:14:28.457 eflags: none 01:14:28.457 sectype: none 01:14:28.457 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 01:14:28.457 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 01:14:28.457 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 01:14:28.457 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 01:14:28.457 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 01:14:28.457 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 01:14:28.457 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 01:14:28.457 06:13:22 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 01:14:28.457 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 01:14:28.457 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:14:28.457 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 01:14:28.457 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:14:28.457 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 01:14:28.457 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:14:28.457 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 01:14:28.457 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:14:28.457 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 01:14:28.457 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:14:28.457 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:14:28.457 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:14:28.457 06:13:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:14:31.744 Initializing NVMe Controllers 01:14:31.744 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:14:31.744 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:14:31.744 Initialization complete. Launching workers. 01:14:31.744 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29920, failed: 0 01:14:31.744 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29920, failed to submit 0 01:14:31.744 success 0, unsuccessful 29920, failed 0 01:14:31.744 06:13:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:14:31.744 06:13:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:14:35.030 Initializing NVMe Controllers 01:14:35.030 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:14:35.030 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:14:35.030 Initialization complete. Launching workers. 
01:14:35.030 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 55997, failed: 0 01:14:35.030 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24250, failed to submit 31747 01:14:35.030 success 0, unsuccessful 24250, failed 0 01:14:35.030 06:13:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:14:35.030 06:13:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:14:38.309 Initializing NVMe Controllers 01:14:38.309 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:14:38.309 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:14:38.309 Initialization complete. Launching workers. 01:14:38.309 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66684, failed: 0 01:14:38.309 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16658, failed to submit 50026 01:14:38.309 success 0, unsuccessful 16658, failed 0 01:14:38.309 06:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 01:14:38.309 06:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 01:14:38.309 06:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 01:14:38.309 06:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 01:14:38.309 06:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:14:38.309 06:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 01:14:38.309 06:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:14:38.309 06:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 01:14:38.309 06:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 01:14:38.309 06:13:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 01:14:38.568 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:14:39.947 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 01:14:40.207 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 01:14:40.207 01:14:40.207 real 0m12.878s 01:14:40.207 user 0m6.563s 01:14:40.207 sys 0m3.755s 01:14:40.207 06:13:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:40.207 06:13:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 01:14:40.207 ************************************ 01:14:40.207 END TEST kernel_target_abort 01:14:40.207 ************************************ 01:14:40.207 06:13:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 01:14:40.207 06:13:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 01:14:40.207 
06:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 01:14:40.207 06:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 01:14:40.207 06:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 01:14:40.207 06:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 01:14:40.207 06:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 01:14:40.207 06:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 01:14:40.207 rmmod nvme_tcp 01:14:40.207 rmmod nvme_fabrics 01:14:40.207 rmmod nvme_keyring 01:14:40.207 06:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 01:14:40.207 06:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 01:14:40.207 06:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 01:14:40.207 06:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 108732 ']' 01:14:40.207 06:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 108732 01:14:40.207 06:13:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 108732 ']' 01:14:40.207 06:13:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 108732 01:14:40.207 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (108732) - No such process 01:14:40.207 Process with pid 108732 is not found 01:14:40.207 06:13:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 108732 is not found' 01:14:40.207 06:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 01:14:40.207 06:13:34 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 01:14:40.775 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:14:40.775 Waiting for block devices as requested 01:14:40.775 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 01:14:40.775 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 01:14:40.775 06:13:35 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 01:14:40.775 06:13:35 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 01:14:40.775 06:13:35 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 01:14:40.775 06:13:35 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 01:14:40.775 06:13:35 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 01:14:40.775 06:13:35 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 01:14:40.775 06:13:35 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 01:14:40.775 06:13:35 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 01:14:40.775 06:13:35 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 01:14:41.033 06:13:35 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 01:14:41.033 06:13:35 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 01:14:41.033 06:13:35 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 01:14:41.033 06:13:35 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 01:14:41.033 06:13:35 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 01:14:41.033 06:13:35 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 01:14:41.033 06:13:35 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 01:14:41.033 06:13:35 
nvmf_abort_qd_sizes -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 01:14:41.033 06:13:35 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 01:14:41.033 06:13:35 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 01:14:41.033 06:13:35 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 01:14:41.033 06:13:35 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 01:14:41.033 06:13:35 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 01:14:41.033 06:13:35 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:14:41.033 06:13:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:14:41.033 06:13:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:14:41.033 06:13:35 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 01:14:41.033 01:14:41.033 real 0m26.897s 01:14:41.033 user 0m50.457s 01:14:41.033 sys 0m6.900s 01:14:41.033 06:13:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:41.033 06:13:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:14:41.033 ************************************ 01:14:41.033 END TEST nvmf_abort_qd_sizes 01:14:41.033 ************************************ 01:14:41.293 06:13:35 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 01:14:41.293 06:13:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 01:14:41.293 06:13:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:41.293 06:13:35 -- common/autotest_common.sh@10 -- # set +x 01:14:41.293 ************************************ 01:14:41.293 START TEST keyring_file 01:14:41.293 ************************************ 01:14:41.293 06:13:35 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 01:14:41.293 * Looking for test storage... 
01:14:41.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 01:14:41.293 06:13:35 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:14:41.293 06:13:35 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 01:14:41.293 06:13:35 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:14:41.293 06:13:35 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@344 -- # case "$op" in 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@345 -- # : 1 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@365 -- # decimal 1 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@353 -- # local d=1 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@355 -- # echo 1 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@366 -- # decimal 2 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@353 -- # local d=2 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@355 -- # echo 2 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@368 -- # return 0 01:14:41.293 06:13:35 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:14:41.293 06:13:35 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:14:41.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:41.293 --rc genhtml_branch_coverage=1 01:14:41.293 --rc genhtml_function_coverage=1 01:14:41.293 --rc genhtml_legend=1 01:14:41.293 --rc geninfo_all_blocks=1 01:14:41.293 --rc geninfo_unexecuted_blocks=1 01:14:41.293 01:14:41.293 ' 01:14:41.293 06:13:35 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:14:41.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:41.293 --rc genhtml_branch_coverage=1 01:14:41.293 --rc genhtml_function_coverage=1 01:14:41.293 --rc genhtml_legend=1 01:14:41.293 --rc geninfo_all_blocks=1 01:14:41.293 --rc 
geninfo_unexecuted_blocks=1 01:14:41.293 01:14:41.293 ' 01:14:41.293 06:13:35 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:14:41.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:41.293 --rc genhtml_branch_coverage=1 01:14:41.293 --rc genhtml_function_coverage=1 01:14:41.293 --rc genhtml_legend=1 01:14:41.293 --rc geninfo_all_blocks=1 01:14:41.293 --rc geninfo_unexecuted_blocks=1 01:14:41.293 01:14:41.293 ' 01:14:41.293 06:13:35 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:14:41.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:41.293 --rc genhtml_branch_coverage=1 01:14:41.293 --rc genhtml_function_coverage=1 01:14:41.293 --rc genhtml_legend=1 01:14:41.293 --rc geninfo_all_blocks=1 01:14:41.293 --rc geninfo_unexecuted_blocks=1 01:14:41.293 01:14:41.293 ' 01:14:41.293 06:13:35 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 01:14:41.293 06:13:35 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:14:41.293 06:13:35 keyring_file -- nvmf/common.sh@7 -- # uname -s 01:14:41.293 06:13:35 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:14:41.293 06:13:35 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:14:41.293 06:13:35 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:14:41.293 06:13:35 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:14:41.293 06:13:35 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:14:41.293 06:13:35 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:14:41.293 06:13:35 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:14:41.293 06:13:35 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:14:41.293 06:13:35 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:14:41.293 06:13:35 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:14:41.293 06:13:35 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:14:41.293 06:13:35 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:14:41.293 06:13:35 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:14:41.293 06:13:35 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:14:41.293 06:13:35 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:14:41.293 06:13:35 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:14:41.293 06:13:35 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:14:41.293 06:13:35 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:14:41.293 06:13:35 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:41.293 06:13:35 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:41.293 06:13:35 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:41.293 06:13:35 keyring_file -- paths/export.sh@5 -- # export PATH 01:14:41.293 06:13:35 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:41.293 06:13:35 keyring_file -- nvmf/common.sh@51 -- # : 0 01:14:41.293 06:13:35 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:14:41.293 06:13:35 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:14:41.293 06:13:35 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:14:41.293 06:13:35 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:14:41.293 06:13:35 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:14:41.293 06:13:35 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:14:41.294 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:14:41.294 06:13:35 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:14:41.294 06:13:35 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:14:41.294 06:13:35 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 01:14:41.294 06:13:35 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 01:14:41.294 06:13:35 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 01:14:41.294 06:13:35 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 01:14:41.294 06:13:35 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 01:14:41.294 06:13:35 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 01:14:41.294 06:13:35 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 01:14:41.294 06:13:35 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 01:14:41.294 06:13:35 keyring_file -- keyring/common.sh@15 -- # local name key digest path 01:14:41.294 06:13:35 
keyring_file -- keyring/common.sh@17 -- # name=key0 01:14:41.294 06:13:35 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:14:41.294 06:13:35 keyring_file -- keyring/common.sh@17 -- # digest=0 01:14:41.294 06:13:35 keyring_file -- keyring/common.sh@18 -- # mktemp 01:14:41.294 06:13:35 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.wvCFQ2UrxI 01:14:41.294 06:13:35 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:14:41.294 06:13:35 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 01:14:41.294 06:13:35 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 01:14:41.294 06:13:35 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:14:41.294 06:13:35 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 01:14:41.294 06:13:35 keyring_file -- nvmf/common.sh@732 -- # digest=0 01:14:41.294 06:13:35 keyring_file -- nvmf/common.sh@733 -- # python - 01:14:41.553 06:13:35 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.wvCFQ2UrxI 01:14:41.553 06:13:35 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.wvCFQ2UrxI 01:14:41.553 06:13:35 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.wvCFQ2UrxI 01:14:41.553 06:13:35 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 01:14:41.553 06:13:35 keyring_file -- keyring/common.sh@15 -- # local name key digest path 01:14:41.553 06:13:35 keyring_file -- keyring/common.sh@17 -- # name=key1 01:14:41.553 06:13:35 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 01:14:41.553 06:13:35 keyring_file -- keyring/common.sh@17 -- # digest=0 01:14:41.553 06:13:35 keyring_file -- keyring/common.sh@18 -- # mktemp 01:14:41.553 06:13:35 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.he9bujvsqX 01:14:41.553 06:13:35 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 01:14:41.553 06:13:35 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 01:14:41.553 06:13:35 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 01:14:41.553 06:13:35 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:14:41.553 06:13:35 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 01:14:41.553 06:13:35 keyring_file -- nvmf/common.sh@732 -- # digest=0 01:14:41.553 06:13:35 keyring_file -- nvmf/common.sh@733 -- # python - 01:14:41.553 06:13:35 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.he9bujvsqX 01:14:41.553 06:13:35 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.he9bujvsqX 01:14:41.553 06:13:35 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.he9bujvsqX 01:14:41.553 06:13:35 keyring_file -- keyring/file.sh@30 -- # tgtpid=109644 01:14:41.553 06:13:35 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:14:41.553 06:13:35 keyring_file -- keyring/file.sh@32 -- # waitforlisten 109644 01:14:41.553 06:13:35 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 109644 ']' 01:14:41.553 06:13:35 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:14:41.553 06:13:35 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 01:14:41.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
01:14:41.553 06:13:35 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:14:41.553 06:13:35 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 01:14:41.553 06:13:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:14:41.553 [2024-12-09 06:13:36.040188] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:14:41.553 [2024-12-09 06:13:36.040315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109644 ] 01:14:41.816 [2024-12-09 06:13:36.191737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:14:41.816 [2024-12-09 06:13:36.232542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:14:42.077 06:13:36 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:14:42.077 06:13:36 keyring_file -- common/autotest_common.sh@868 -- # return 0 01:14:42.077 06:13:36 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 01:14:42.077 06:13:36 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:42.077 06:13:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:14:42.077 [2024-12-09 06:13:36.442302] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:14:42.077 null0 01:14:42.077 [2024-12-09 06:13:36.474247] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:14:42.078 [2024-12-09 06:13:36.474475] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:14:42.078 06:13:36 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:42.078 06:13:36 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:14:42.078 06:13:36 keyring_file -- common/autotest_common.sh@652 -- # local es=0 01:14:42.078 06:13:36 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:14:42.078 06:13:36 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 01:14:42.078 06:13:36 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:14:42.078 06:13:36 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 01:14:42.078 06:13:36 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:14:42.078 06:13:36 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:14:42.078 06:13:36 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:42.078 06:13:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:14:42.078 [2024-12-09 06:13:36.506234] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 01:14:42.078 2024/12/09 06:13:36 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 01:14:42.078 request: 01:14:42.078 { 01:14:42.078 "method": "nvmf_subsystem_add_listener", 01:14:42.078 "params": { 
01:14:42.078 "nqn": "nqn.2016-06.io.spdk:cnode0", 01:14:42.078 "secure_channel": false, 01:14:42.078 "listen_address": { 01:14:42.078 "trtype": "tcp", 01:14:42.078 "traddr": "127.0.0.1", 01:14:42.078 "trsvcid": "4420" 01:14:42.078 } 01:14:42.078 } 01:14:42.078 } 01:14:42.078 Got JSON-RPC error response 01:14:42.078 GoRPCClient: error on JSON-RPC call 01:14:42.078 06:13:36 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 01:14:42.078 06:13:36 keyring_file -- common/autotest_common.sh@655 -- # es=1 01:14:42.078 06:13:36 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:14:42.078 06:13:36 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:14:42.078 06:13:36 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:14:42.078 06:13:36 keyring_file -- keyring/file.sh@47 -- # bperfpid=109662 01:14:42.078 06:13:36 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 01:14:42.078 06:13:36 keyring_file -- keyring/file.sh@49 -- # waitforlisten 109662 /var/tmp/bperf.sock 01:14:42.078 06:13:36 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 109662 ']' 01:14:42.078 06:13:36 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:14:42.078 06:13:36 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 01:14:42.078 06:13:36 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:14:42.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:14:42.078 06:13:36 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 01:14:42.078 06:13:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:14:42.078 [2024-12-09 06:13:36.597739] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:14:42.078 [2024-12-09 06:13:36.597890] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109662 ] 01:14:42.337 [2024-12-09 06:13:36.750866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:14:42.337 [2024-12-09 06:13:36.782331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:14:42.337 06:13:36 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:14:42.337 06:13:36 keyring_file -- common/autotest_common.sh@868 -- # return 0 01:14:42.337 06:13:36 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wvCFQ2UrxI 01:14:42.337 06:13:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wvCFQ2UrxI 01:14:42.595 06:13:37 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.he9bujvsqX 01:14:42.595 06:13:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.he9bujvsqX 01:14:43.160 06:13:37 keyring_file -- keyring/file.sh@52 -- # get_key key0 01:14:43.160 06:13:37 keyring_file -- keyring/file.sh@52 -- # jq -r .path 01:14:43.160 06:13:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:14:43.160 06:13:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:14:43.160 06:13:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:14:43.418 06:13:37 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.wvCFQ2UrxI == \/\t\m\p\/\t\m\p\.\w\v\C\F\Q\2\U\r\x\I ]] 01:14:43.418 06:13:37 keyring_file -- keyring/file.sh@53 -- # get_key key1 01:14:43.418 06:13:37 keyring_file -- keyring/file.sh@53 -- # jq -r .path 01:14:43.418 06:13:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:14:43.418 06:13:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:14:43.418 06:13:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:14:43.676 06:13:38 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.he9bujvsqX == \/\t\m\p\/\t\m\p\.\h\e\9\b\u\j\v\s\q\X ]] 01:14:43.676 06:13:38 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 01:14:43.676 06:13:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:14:43.676 06:13:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:14:43.676 06:13:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:14:43.676 06:13:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:14:43.676 06:13:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:14:43.934 06:13:38 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 01:14:43.934 06:13:38 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 01:14:43.934 06:13:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:14:43.934 06:13:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:14:43.934 06:13:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:14:43.934 06:13:38 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:14:43.934 06:13:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:14:44.192 06:13:38 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 01:14:44.192 06:13:38 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:14:44.192 06:13:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:14:44.450 [2024-12-09 06:13:38.958830] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:14:44.450 nvme0n1 01:14:44.708 06:13:39 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 01:14:44.708 06:13:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:14:44.708 06:13:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:14:44.708 06:13:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:14:44.708 06:13:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:14:44.708 06:13:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:14:44.966 06:13:39 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 01:14:44.966 06:13:39 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 01:14:44.966 06:13:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:14:44.966 06:13:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:14:44.966 06:13:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:14:44.966 06:13:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:14:44.966 06:13:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:14:45.225 06:13:39 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 01:14:45.225 06:13:39 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:14:45.483 Running I/O for 1 seconds... 
01:14:46.416 11670.00 IOPS, 45.59 MiB/s
01:14:46.416 Latency(us)
01:14:46.416 [2024-12-09T06:13:41.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
01:14:46.416 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
01:14:46.416 nvme0n1 : 1.01 11717.24 45.77 0.00 0.00 10891.92 4140.68 16443.58
01:14:46.416 [2024-12-09T06:13:41.002Z] ===================================================================================================================
01:14:46.416 [2024-12-09T06:13:41.002Z] Total : 11717.24 45.77 0.00 0.00 10891.92 4140.68 16443.58
01:14:46.416 {
01:14:46.416 "results": [
01:14:46.416 {
01:14:46.416 "job": "nvme0n1",
01:14:46.416 "core_mask": "0x2",
01:14:46.416 "workload": "randrw",
01:14:46.416 "percentage": 50,
01:14:46.416 "status": "finished",
01:14:46.416 "queue_depth": 128,
01:14:46.416 "io_size": 4096,
01:14:46.416 "runtime": 1.006978,
01:14:46.416 "iops": 11717.23711938096,
01:14:46.416 "mibps": 45.77045749758187,
01:14:46.416 "io_failed": 0,
01:14:46.416 "io_timeout": 0,
01:14:46.416 "avg_latency_us": 10891.915660032822,
01:14:46.416 "min_latency_us": 4140.683636363637,
01:14:46.416 "max_latency_us": 16443.578181818182
01:14:46.416 }
01:14:46.416 ],
01:14:46.416 "core_count": 1
01:14:46.416 }
01:14:46.416 06:13:40 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0
01:14:46.416 06:13:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
01:14:46.674 06:13:41 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0
01:14:46.674 06:13:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
01:14:46.674 06:13:41 keyring_file -- keyring/common.sh@12 -- # get_key key0
01:14:46.674 06:13:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
01:14:46.674 06:13:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
01:14:46.674 06:13:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
01:14:46.932 06:13:41 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 ))
01:14:46.932 06:13:41 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1
01:14:46.932 06:13:41 keyring_file -- keyring/common.sh@12 -- # get_key key1
01:14:46.932 06:13:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
01:14:46.932 06:13:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
01:14:46.932 06:13:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
01:14:46.932 06:13:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
01:14:47.499 06:13:41 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 ))
01:14:47.499 06:13:41 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
01:14:47.499 06:13:41 keyring_file -- common/autotest_common.sh@652 -- # local es=0
01:14:47.499 06:13:41 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
01:14:47.499 06:13:41 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
01:14:47.499 06:13:41 keyring_file --
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:14:47.499 06:13:41 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 01:14:47.499 06:13:41 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:14:47.499 06:13:41 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:14:47.499 06:13:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:14:47.499 [2024-12-09 06:13:42.029185] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:14:47.499 [2024-12-09 06:13:42.029583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017010 (107): Transport endpoint is not connected 01:14:47.499 [2024-12-09 06:13:42.030574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017010 (9): Bad file descriptor 01:14:47.499 [2024-12-09 06:13:42.031569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 01:14:47.499 [2024-12-09 06:13:42.031587] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 01:14:47.499 [2024-12-09 06:13:42.031614] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 01:14:47.499 [2024-12-09 06:13:42.031624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
01:14:47.499 2024/12/09 06:13:42 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:14:47.499 request: 01:14:47.499 { 01:14:47.499 "method": "bdev_nvme_attach_controller", 01:14:47.499 "params": { 01:14:47.499 "name": "nvme0", 01:14:47.499 "trtype": "tcp", 01:14:47.499 "traddr": "127.0.0.1", 01:14:47.499 "adrfam": "ipv4", 01:14:47.499 "trsvcid": "4420", 01:14:47.499 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:14:47.499 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:14:47.499 "prchk_reftag": false, 01:14:47.499 "prchk_guard": false, 01:14:47.499 "hdgst": false, 01:14:47.499 "ddgst": false, 01:14:47.499 "psk": "key1", 01:14:47.499 "allow_unrecognized_csi": false 01:14:47.499 } 01:14:47.499 } 01:14:47.499 Got JSON-RPC error response 01:14:47.499 GoRPCClient: error on JSON-RPC call 01:14:47.499 06:13:42 keyring_file -- common/autotest_common.sh@655 -- # es=1 01:14:47.499 06:13:42 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:14:47.499 06:13:42 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:14:47.499 06:13:42 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:14:47.499 06:13:42 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 01:14:47.499 06:13:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:14:47.499 06:13:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:14:47.499 06:13:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:14:47.499 06:13:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:14:47.499 06:13:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:14:47.757 06:13:42 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 01:14:47.757 06:13:42 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 01:14:47.757 06:13:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:14:48.013 06:13:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:14:48.013 06:13:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:14:48.013 06:13:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:14:48.013 06:13:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:14:48.271 06:13:42 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 01:14:48.271 06:13:42 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 01:14:48.271 06:13:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:14:48.528 06:13:42 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 01:14:48.528 06:13:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 01:14:48.786 06:13:43 keyring_file -- keyring/file.sh@78 -- # jq length 01:14:48.786 06:13:43 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 01:14:48.786 06:13:43 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:14:49.043 06:13:43 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 01:14:49.043 06:13:43 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.wvCFQ2UrxI 01:14:49.043 06:13:43 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.wvCFQ2UrxI 01:14:49.043 06:13:43 keyring_file -- common/autotest_common.sh@652 -- # local es=0 01:14:49.043 06:13:43 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.wvCFQ2UrxI 01:14:49.043 06:13:43 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 01:14:49.043 06:13:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:14:49.043 06:13:43 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 01:14:49.043 06:13:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:14:49.043 06:13:43 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wvCFQ2UrxI 01:14:49.043 06:13:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wvCFQ2UrxI 01:14:49.302 [2024-12-09 06:13:43.672037] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.wvCFQ2UrxI': 0100660 01:14:49.302 [2024-12-09 06:13:43.672078] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 01:14:49.302 2024/12/09 06:13:43 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.wvCFQ2UrxI], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 01:14:49.302 request: 01:14:49.302 { 01:14:49.302 "method": "keyring_file_add_key", 01:14:49.302 "params": { 01:14:49.302 "name": "key0", 01:14:49.302 "path": "/tmp/tmp.wvCFQ2UrxI" 01:14:49.302 } 01:14:49.302 } 01:14:49.302 Got JSON-RPC error response 01:14:49.302 GoRPCClient: error on JSON-RPC call 01:14:49.302 06:13:43 keyring_file -- common/autotest_common.sh@655 -- # es=1 01:14:49.302 06:13:43 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:14:49.302 06:13:43 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:14:49.302 06:13:43 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:14:49.302 06:13:43 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.wvCFQ2UrxI 01:14:49.302 06:13:43 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wvCFQ2UrxI 01:14:49.302 06:13:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wvCFQ2UrxI 01:14:49.562 06:13:43 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.wvCFQ2UrxI 01:14:49.562 06:13:43 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 01:14:49.562 06:13:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:14:49.562 06:13:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:14:49.562 06:13:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:14:49.562 06:13:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:14:49.562 06:13:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:14:49.820 06:13:44 keyring_file -- 
keyring/file.sh@89 -- # (( 1 == 1 )) 01:14:49.820 06:13:44 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:14:49.820 06:13:44 keyring_file -- common/autotest_common.sh@652 -- # local es=0 01:14:49.820 06:13:44 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:14:49.820 06:13:44 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 01:14:49.820 06:13:44 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:14:49.820 06:13:44 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 01:14:49.820 06:13:44 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:14:49.820 06:13:44 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:14:49.820 06:13:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:14:50.078 [2024-12-09 06:13:44.524313] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.wvCFQ2UrxI': No such file or directory 01:14:50.078 [2024-12-09 06:13:44.524381] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 01:14:50.078 [2024-12-09 06:13:44.524425] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 01:14:50.078 [2024-12-09 06:13:44.524437] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 01:14:50.079 [2024-12-09 06:13:44.524448] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:14:50.079 [2024-12-09 06:13:44.524457] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 01:14:50.079 2024/12/09 06:13:44 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 01:14:50.079 request: 01:14:50.079 { 01:14:50.079 "method": "bdev_nvme_attach_controller", 01:14:50.079 "params": { 01:14:50.079 "name": "nvme0", 01:14:50.079 "trtype": "tcp", 01:14:50.079 "traddr": "127.0.0.1", 01:14:50.079 "adrfam": "ipv4", 01:14:50.079 "trsvcid": "4420", 01:14:50.079 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:14:50.079 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:14:50.079 "prchk_reftag": false, 01:14:50.079 "prchk_guard": false, 01:14:50.079 "hdgst": false, 01:14:50.079 "ddgst": false, 01:14:50.079 "psk": "key0", 01:14:50.079 "allow_unrecognized_csi": false 01:14:50.079 } 01:14:50.079 } 01:14:50.079 Got JSON-RPC error response 01:14:50.079 
GoRPCClient: error on JSON-RPC call 01:14:50.079 06:13:44 keyring_file -- common/autotest_common.sh@655 -- # es=1 01:14:50.079 06:13:44 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:14:50.079 06:13:44 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:14:50.079 06:13:44 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:14:50.079 06:13:44 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 01:14:50.079 06:13:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:14:50.337 06:13:44 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 01:14:50.337 06:13:44 keyring_file -- keyring/common.sh@15 -- # local name key digest path 01:14:50.337 06:13:44 keyring_file -- keyring/common.sh@17 -- # name=key0 01:14:50.337 06:13:44 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:14:50.337 06:13:44 keyring_file -- keyring/common.sh@17 -- # digest=0 01:14:50.337 06:13:44 keyring_file -- keyring/common.sh@18 -- # mktemp 01:14:50.337 06:13:44 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.stl2D024eO 01:14:50.337 06:13:44 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:14:50.337 06:13:44 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 01:14:50.337 06:13:44 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 01:14:50.337 06:13:44 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:14:50.337 06:13:44 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 01:14:50.337 06:13:44 keyring_file -- nvmf/common.sh@732 -- # digest=0 01:14:50.337 06:13:44 keyring_file -- nvmf/common.sh@733 -- # python - 01:14:50.337 06:13:44 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.stl2D024eO 01:14:50.337 06:13:44 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.stl2D024eO 01:14:50.337 06:13:44 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.stl2D024eO 01:14:50.337 06:13:44 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.stl2D024eO 01:14:50.337 06:13:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.stl2D024eO 01:14:50.595 06:13:45 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:14:50.595 06:13:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:14:51.164 nvme0n1 01:14:51.164 06:13:45 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 01:14:51.164 06:13:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:14:51.164 06:13:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:14:51.164 06:13:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:14:51.164 06:13:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:14:51.164 06:13:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
01:14:51.465 06:13:45 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 01:14:51.465 06:13:45 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 01:14:51.465 06:13:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:14:51.723 06:13:46 keyring_file -- keyring/file.sh@102 -- # get_key key0 01:14:51.723 06:13:46 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 01:14:51.723 06:13:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:14:51.723 06:13:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:14:51.723 06:13:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:14:51.981 06:13:46 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 01:14:51.981 06:13:46 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 01:14:51.981 06:13:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:14:51.981 06:13:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:14:51.981 06:13:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:14:51.981 06:13:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:14:51.981 06:13:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:14:52.238 06:13:46 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 01:14:52.238 06:13:46 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 01:14:52.238 06:13:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 01:14:52.496 06:13:46 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 01:14:52.496 06:13:46 keyring_file -- keyring/file.sh@105 -- # jq length 01:14:52.496 06:13:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:14:52.754 06:13:47 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 01:14:52.754 06:13:47 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.stl2D024eO 01:14:52.754 06:13:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.stl2D024eO 01:14:53.321 06:13:47 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.he9bujvsqX 01:14:53.321 06:13:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.he9bujvsqX 01:14:53.580 06:13:47 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:14:53.580 06:13:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:14:53.838 nvme0n1 01:14:53.838 06:13:48 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 01:14:53.838 06:13:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 
01:14:54.098 06:13:48 keyring_file -- keyring/file.sh@113 -- # config='{ 01:14:54.098 "subsystems": [ 01:14:54.098 { 01:14:54.098 "subsystem": "keyring", 01:14:54.098 "config": [ 01:14:54.098 { 01:14:54.098 "method": "keyring_file_add_key", 01:14:54.098 "params": { 01:14:54.098 "name": "key0", 01:14:54.098 "path": "/tmp/tmp.stl2D024eO" 01:14:54.098 } 01:14:54.098 }, 01:14:54.098 { 01:14:54.098 "method": "keyring_file_add_key", 01:14:54.098 "params": { 01:14:54.098 "name": "key1", 01:14:54.098 "path": "/tmp/tmp.he9bujvsqX" 01:14:54.098 } 01:14:54.098 } 01:14:54.098 ] 01:14:54.098 }, 01:14:54.098 { 01:14:54.098 "subsystem": "iobuf", 01:14:54.098 "config": [ 01:14:54.098 { 01:14:54.098 "method": "iobuf_set_options", 01:14:54.098 "params": { 01:14:54.098 "enable_numa": false, 01:14:54.098 "large_bufsize": 135168, 01:14:54.098 "large_pool_count": 1024, 01:14:54.098 "small_bufsize": 8192, 01:14:54.098 "small_pool_count": 8192 01:14:54.098 } 01:14:54.098 } 01:14:54.098 ] 01:14:54.098 }, 01:14:54.098 { 01:14:54.098 "subsystem": "sock", 01:14:54.098 "config": [ 01:14:54.098 { 01:14:54.098 "method": "sock_set_default_impl", 01:14:54.098 "params": { 01:14:54.098 "impl_name": "posix" 01:14:54.098 } 01:14:54.098 }, 01:14:54.098 { 01:14:54.098 "method": "sock_impl_set_options", 01:14:54.098 "params": { 01:14:54.098 "enable_ktls": false, 01:14:54.098 "enable_placement_id": 0, 01:14:54.098 "enable_quickack": false, 01:14:54.098 "enable_recv_pipe": true, 01:14:54.098 "enable_zerocopy_send_client": false, 01:14:54.098 "enable_zerocopy_send_server": true, 01:14:54.098 "impl_name": "ssl", 01:14:54.098 "recv_buf_size": 4096, 01:14:54.098 "send_buf_size": 4096, 01:14:54.098 "tls_version": 0, 01:14:54.098 "zerocopy_threshold": 0 01:14:54.098 } 01:14:54.098 }, 01:14:54.098 { 01:14:54.098 "method": "sock_impl_set_options", 01:14:54.098 "params": { 01:14:54.098 "enable_ktls": false, 01:14:54.098 "enable_placement_id": 0, 01:14:54.098 "enable_quickack": false, 01:14:54.098 "enable_recv_pipe": true, 01:14:54.098 "enable_zerocopy_send_client": false, 01:14:54.099 "enable_zerocopy_send_server": true, 01:14:54.099 "impl_name": "posix", 01:14:54.099 "recv_buf_size": 2097152, 01:14:54.099 "send_buf_size": 2097152, 01:14:54.099 "tls_version": 0, 01:14:54.099 "zerocopy_threshold": 0 01:14:54.099 } 01:14:54.099 } 01:14:54.099 ] 01:14:54.099 }, 01:14:54.099 { 01:14:54.099 "subsystem": "vmd", 01:14:54.099 "config": [] 01:14:54.099 }, 01:14:54.099 { 01:14:54.099 "subsystem": "accel", 01:14:54.099 "config": [ 01:14:54.099 { 01:14:54.099 "method": "accel_set_options", 01:14:54.099 "params": { 01:14:54.099 "buf_count": 2048, 01:14:54.099 "large_cache_size": 16, 01:14:54.099 "sequence_count": 2048, 01:14:54.099 "small_cache_size": 128, 01:14:54.099 "task_count": 2048 01:14:54.099 } 01:14:54.099 } 01:14:54.099 ] 01:14:54.099 }, 01:14:54.099 { 01:14:54.099 "subsystem": "bdev", 01:14:54.099 "config": [ 01:14:54.099 { 01:14:54.099 "method": "bdev_set_options", 01:14:54.099 "params": { 01:14:54.099 "bdev_auto_examine": true, 01:14:54.099 "bdev_io_cache_size": 256, 01:14:54.099 "bdev_io_pool_size": 65535, 01:14:54.099 "iobuf_large_cache_size": 16, 01:14:54.099 "iobuf_small_cache_size": 128 01:14:54.099 } 01:14:54.099 }, 01:14:54.099 { 01:14:54.099 "method": "bdev_raid_set_options", 01:14:54.099 "params": { 01:14:54.099 "process_max_bandwidth_mb_sec": 0, 01:14:54.099 "process_window_size_kb": 1024 01:14:54.099 } 01:14:54.099 }, 01:14:54.099 { 01:14:54.099 "method": "bdev_iscsi_set_options", 01:14:54.099 "params": { 01:14:54.099 
"timeout_sec": 30 01:14:54.099 } 01:14:54.099 }, 01:14:54.099 { 01:14:54.099 "method": "bdev_nvme_set_options", 01:14:54.099 "params": { 01:14:54.099 "action_on_timeout": "none", 01:14:54.099 "allow_accel_sequence": false, 01:14:54.099 "arbitration_burst": 0, 01:14:54.099 "bdev_retry_count": 3, 01:14:54.099 "ctrlr_loss_timeout_sec": 0, 01:14:54.099 "delay_cmd_submit": true, 01:14:54.099 "dhchap_dhgroups": [ 01:14:54.099 "null", 01:14:54.099 "ffdhe2048", 01:14:54.099 "ffdhe3072", 01:14:54.099 "ffdhe4096", 01:14:54.099 "ffdhe6144", 01:14:54.099 "ffdhe8192" 01:14:54.099 ], 01:14:54.099 "dhchap_digests": [ 01:14:54.099 "sha256", 01:14:54.099 "sha384", 01:14:54.099 "sha512" 01:14:54.099 ], 01:14:54.099 "disable_auto_failback": false, 01:14:54.099 "fast_io_fail_timeout_sec": 0, 01:14:54.099 "generate_uuids": false, 01:14:54.099 "high_priority_weight": 0, 01:14:54.099 "io_path_stat": false, 01:14:54.099 "io_queue_requests": 512, 01:14:54.099 "keep_alive_timeout_ms": 10000, 01:14:54.099 "low_priority_weight": 0, 01:14:54.099 "medium_priority_weight": 0, 01:14:54.099 "nvme_adminq_poll_period_us": 10000, 01:14:54.099 "nvme_error_stat": false, 01:14:54.099 "nvme_ioq_poll_period_us": 0, 01:14:54.099 "rdma_cm_event_timeout_ms": 0, 01:14:54.099 "rdma_max_cq_size": 0, 01:14:54.099 "rdma_srq_size": 0, 01:14:54.099 "reconnect_delay_sec": 0, 01:14:54.099 "timeout_admin_us": 0, 01:14:54.099 "timeout_us": 0, 01:14:54.099 "transport_ack_timeout": 0, 01:14:54.099 "transport_retry_count": 4, 01:14:54.099 "transport_tos": 0 01:14:54.099 } 01:14:54.099 }, 01:14:54.099 { 01:14:54.099 "method": "bdev_nvme_attach_controller", 01:14:54.099 "params": { 01:14:54.099 "adrfam": "IPv4", 01:14:54.099 "ctrlr_loss_timeout_sec": 0, 01:14:54.099 "ddgst": false, 01:14:54.099 "fast_io_fail_timeout_sec": 0, 01:14:54.099 "hdgst": false, 01:14:54.099 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:14:54.099 "multipath": "multipath", 01:14:54.099 "name": "nvme0", 01:14:54.099 "prchk_guard": false, 01:14:54.099 "prchk_reftag": false, 01:14:54.099 "psk": "key0", 01:14:54.099 "reconnect_delay_sec": 0, 01:14:54.099 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:14:54.099 "traddr": "127.0.0.1", 01:14:54.099 "trsvcid": "4420", 01:14:54.099 "trtype": "TCP" 01:14:54.099 } 01:14:54.099 }, 01:14:54.099 { 01:14:54.099 "method": "bdev_nvme_set_hotplug", 01:14:54.099 "params": { 01:14:54.099 "enable": false, 01:14:54.099 "period_us": 100000 01:14:54.099 } 01:14:54.099 }, 01:14:54.099 { 01:14:54.099 "method": "bdev_wait_for_examine" 01:14:54.099 } 01:14:54.099 ] 01:14:54.099 }, 01:14:54.099 { 01:14:54.099 "subsystem": "nbd", 01:14:54.099 "config": [] 01:14:54.099 } 01:14:54.099 ] 01:14:54.099 }' 01:14:54.099 06:13:48 keyring_file -- keyring/file.sh@115 -- # killprocess 109662 01:14:54.099 06:13:48 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 109662 ']' 01:14:54.099 06:13:48 keyring_file -- common/autotest_common.sh@958 -- # kill -0 109662 01:14:54.099 06:13:48 keyring_file -- common/autotest_common.sh@959 -- # uname 01:14:54.099 06:13:48 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:14:54.099 06:13:48 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109662 01:14:54.099 06:13:48 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:14:54.099 killing process with pid 109662 01:14:54.099 06:13:48 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:14:54.099 06:13:48 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 109662' 01:14:54.099 Received shutdown signal, test time was about 1.000000 seconds 01:14:54.099 01:14:54.099 Latency(us) 01:14:54.099 [2024-12-09T06:13:48.685Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:14:54.099 [2024-12-09T06:13:48.685Z] =================================================================================================================== 01:14:54.099 [2024-12-09T06:13:48.685Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:14:54.099 06:13:48 keyring_file -- common/autotest_common.sh@973 -- # kill 109662 01:14:54.099 06:13:48 keyring_file -- common/autotest_common.sh@978 -- # wait 109662 01:14:54.359 06:13:48 keyring_file -- keyring/file.sh@118 -- # bperfpid=110126 01:14:54.359 06:13:48 keyring_file -- keyring/file.sh@120 -- # waitforlisten 110126 /var/tmp/bperf.sock 01:14:54.359 06:13:48 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 110126 ']' 01:14:54.359 06:13:48 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:14:54.359 06:13:48 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 01:14:54.359 06:13:48 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 01:14:54.359 06:13:48 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:14:54.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:14:54.359 06:13:48 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 01:14:54.359 06:13:48 keyring_file -- keyring/file.sh@116 -- # echo '{ 01:14:54.359 "subsystems": [ 01:14:54.359 { 01:14:54.359 "subsystem": "keyring", 01:14:54.359 "config": [ 01:14:54.359 { 01:14:54.359 "method": "keyring_file_add_key", 01:14:54.359 "params": { 01:14:54.359 "name": "key0", 01:14:54.359 "path": "/tmp/tmp.stl2D024eO" 01:14:54.359 } 01:14:54.359 }, 01:14:54.359 { 01:14:54.359 "method": "keyring_file_add_key", 01:14:54.359 "params": { 01:14:54.359 "name": "key1", 01:14:54.359 "path": "/tmp/tmp.he9bujvsqX" 01:14:54.359 } 01:14:54.359 } 01:14:54.359 ] 01:14:54.359 }, 01:14:54.359 { 01:14:54.359 "subsystem": "iobuf", 01:14:54.359 "config": [ 01:14:54.359 { 01:14:54.359 "method": "iobuf_set_options", 01:14:54.359 "params": { 01:14:54.359 "enable_numa": false, 01:14:54.359 "large_bufsize": 135168, 01:14:54.359 "large_pool_count": 1024, 01:14:54.359 "small_bufsize": 8192, 01:14:54.359 "small_pool_count": 8192 01:14:54.359 } 01:14:54.359 } 01:14:54.359 ] 01:14:54.359 }, 01:14:54.359 { 01:14:54.359 "subsystem": "sock", 01:14:54.359 "config": [ 01:14:54.359 { 01:14:54.359 "method": "sock_set_default_impl", 01:14:54.359 "params": { 01:14:54.359 "impl_name": "posix" 01:14:54.359 } 01:14:54.359 }, 01:14:54.359 { 01:14:54.359 "method": "sock_impl_set_options", 01:14:54.359 "params": { 01:14:54.359 "enable_ktls": false, 01:14:54.359 "enable_placement_id": 0, 01:14:54.359 "enable_quickack": false, 01:14:54.359 "enable_recv_pipe": true, 01:14:54.359 "enable_zerocopy_send_client": false, 01:14:54.359 "enable_zerocopy_send_server": true, 01:14:54.359 "impl_name": "ssl", 01:14:54.359 "recv_buf_size": 4096, 01:14:54.359 "send_buf_size": 4096, 01:14:54.359 "tls_version": 0, 01:14:54.359 "zerocopy_threshold": 0 01:14:54.359 } 01:14:54.359 }, 01:14:54.359 { 01:14:54.359 "method": "sock_impl_set_options", 01:14:54.359 "params": { 01:14:54.359 
"enable_ktls": false, 01:14:54.359 "enable_placement_id": 0, 01:14:54.359 "enable_quickack": false, 01:14:54.359 "enable_recv_pipe": true, 01:14:54.359 "enable_zerocopy_send_client": false, 01:14:54.359 "enable_zerocopy_send_server": true, 01:14:54.359 "impl_name": "posix", 01:14:54.359 "recv_buf_size": 2097152, 01:14:54.359 "send_buf_size": 2097152, 01:14:54.359 "tls_version": 0, 01:14:54.359 "zerocopy_threshold": 0 01:14:54.359 } 01:14:54.359 } 01:14:54.360 ] 01:14:54.360 }, 01:14:54.360 { 01:14:54.360 "subsystem": "vmd", 01:14:54.360 "config": [] 01:14:54.360 }, 01:14:54.360 { 01:14:54.360 "subsystem": "accel", 01:14:54.360 "config": [ 01:14:54.360 { 01:14:54.360 "method": "accel_set_options", 01:14:54.360 "params": { 01:14:54.360 "buf_count": 2048, 01:14:54.360 "large_cache_size": 16, 01:14:54.360 "sequence_count": 2048, 01:14:54.360 "small_cache_size": 128, 01:14:54.360 "task_count": 2048 01:14:54.360 } 01:14:54.360 } 01:14:54.360 ] 01:14:54.360 }, 01:14:54.360 { 01:14:54.360 "subsystem": "bdev", 01:14:54.360 "config": [ 01:14:54.360 { 01:14:54.360 "method": "bdev_set_options", 01:14:54.360 "params": { 01:14:54.360 "bdev_auto_examine": true, 01:14:54.360 "bdev_io_cache_size": 256, 01:14:54.360 "bdev_io_pool_size": 65535, 01:14:54.360 "iobuf_large_cache_size": 16, 01:14:54.360 "iobuf_small_cache_size": 128 01:14:54.360 } 01:14:54.360 }, 01:14:54.360 { 01:14:54.360 "method": "bdev_raid_set_options", 01:14:54.360 "params": { 01:14:54.360 "process_max_bandwidth_mb_sec": 0, 01:14:54.360 "process_window_size_kb": 1024 01:14:54.360 } 01:14:54.360 }, 01:14:54.360 { 01:14:54.360 "method": "bdev_iscsi_set_options", 01:14:54.360 "params": { 01:14:54.360 "timeout_sec": 30 01:14:54.360 } 01:14:54.360 }, 01:14:54.360 { 01:14:54.360 "method": "bdev_nvme_set_options", 01:14:54.360 "params": { 01:14:54.360 "action_on_timeout": "none", 01:14:54.360 "allow_accel_sequence": false, 01:14:54.360 "arbitration_burst": 0, 01:14:54.360 "bdev_retry_count": 3, 01:14:54.360 "ctrlr_loss_timeout_sec": 0, 01:14:54.360 "delay_cmd_submit": true, 01:14:54.360 "dhchap_dhgroups": [ 01:14:54.360 "null", 01:14:54.360 "ffdhe2048", 01:14:54.360 "ffdhe3072", 01:14:54.360 "ffdhe4096", 01:14:54.360 "ffdhe6144", 01:14:54.360 "ffdhe8192" 01:14:54.360 ], 01:14:54.360 "dhchap_digests": [ 01:14:54.360 "sha256", 01:14:54.360 "sha384", 01:14:54.360 "sha512" 01:14:54.360 ], 01:14:54.360 "disable_auto_failback": false, 01:14:54.360 "fast_io_fail_timeout_sec": 0, 01:14:54.360 "generate_uuids": false, 01:14:54.360 "high_priority_weight": 0, 01:14:54.360 "io_path_stat": false, 01:14:54.360 "io_queue_requests": 512, 01:14:54.360 "keep_alive_timeout_ms": 10000, 01:14:54.360 "low_priority_weight": 0, 01:14:54.360 "medium_priority_weight": 0, 01:14:54.360 "nvme_adminq_poll_period_us": 10000, 01:14:54.360 "nvme_error_stat": false, 01:14:54.360 "nvme_ioq_poll_period_us": 0, 01:14:54.360 "rdma_cm_event_timeout_ms": 0, 01:14:54.360 "rdma_max_cq_size": 0, 01:14:54.360 "rdma_srq_size": 0, 01:14:54.360 "reconnect_delay_sec": 0, 01:14:54.360 "timeout_admin_us": 0, 01:14:54.360 "timeout_us": 0, 01:14:54.360 "transport_ack_timeout": 0, 01:14:54.360 "transport_retry_count": 4, 01:14:54.360 "transport_tos": 0 01:14:54.360 } 01:14:54.360 }, 01:14:54.360 { 01:14:54.360 "method": "bdev_nvme_attach_controller", 01:14:54.360 "params": { 01:14:54.360 "adrfam": "IPv4", 01:14:54.360 "ctrlr_loss_timeout_sec": 0, 01:14:54.360 "ddgst": false, 01:14:54.360 "fast_io_fail_timeout_sec": 0, 01:14:54.360 "hdgst": false, 01:14:54.360 "hostnqn": "nqn.2016-06.io.spdk:host0", 
01:14:54.360 "multipath": "multipath", 01:14:54.360 "name": "nvme0", 01:14:54.360 "prchk_guard": false, 01:14:54.360 "prchk_reftag": false, 01:14:54.360 "psk": "key0", 01:14:54.360 "reconnect_delay_sec": 0, 01:14:54.360 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:14:54.360 "traddr": "127.0.0.1", 01:14:54.360 "trsvcid": "4420", 01:14:54.360 "trtype": "TCP" 01:14:54.360 } 01:14:54.360 }, 01:14:54.360 { 01:14:54.360 "method": "bdev_nvme_set_hotplug", 01:14:54.360 "params": { 01:14:54.360 "enable": false, 01:14:54.360 "period_us": 100000 01:14:54.360 } 01:14:54.360 }, 01:14:54.360 { 01:14:54.360 "method": "bdev_wait_for_examine" 01:14:54.360 } 01:14:54.360 ] 01:14:54.360 }, 01:14:54.360 { 01:14:54.360 "subsystem": "nbd", 01:14:54.360 "config": [] 01:14:54.360 } 01:14:54.360 ] 01:14:54.360 }' 01:14:54.360 06:13:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:14:54.360 [2024-12-09 06:13:48.779416] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 01:14:54.360 [2024-12-09 06:13:48.779511] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110126 ] 01:14:54.360 [2024-12-09 06:13:48.923411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:14:54.619 [2024-12-09 06:13:48.955234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:14:54.619 [2024-12-09 06:13:49.097309] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:14:55.555 06:13:49 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:14:55.555 06:13:49 keyring_file -- common/autotest_common.sh@868 -- # return 0 01:14:55.555 06:13:49 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 01:14:55.555 06:13:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:14:55.556 06:13:49 keyring_file -- keyring/file.sh@121 -- # jq length 01:14:55.556 06:13:50 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 01:14:55.556 06:13:50 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 01:14:55.556 06:13:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:14:55.556 06:13:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:14:55.556 06:13:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:14:55.556 06:13:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:14:55.556 06:13:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:14:55.814 06:13:50 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 01:14:55.814 06:13:50 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 01:14:55.814 06:13:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:14:55.814 06:13:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:14:55.814 06:13:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:14:55.814 06:13:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:14:55.814 06:13:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:14:56.072 06:13:50 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 01:14:56.072 06:13:50 keyring_file -- 
keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 01:14:56.072 06:13:50 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 01:14:56.072 06:13:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 01:14:56.639 06:13:50 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 01:14:56.639 06:13:50 keyring_file -- keyring/file.sh@1 -- # cleanup 01:14:56.639 06:13:50 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.stl2D024eO /tmp/tmp.he9bujvsqX 01:14:56.639 06:13:50 keyring_file -- keyring/file.sh@20 -- # killprocess 110126 01:14:56.639 06:13:50 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 110126 ']' 01:14:56.639 06:13:50 keyring_file -- common/autotest_common.sh@958 -- # kill -0 110126 01:14:56.639 06:13:50 keyring_file -- common/autotest_common.sh@959 -- # uname 01:14:56.640 06:13:50 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:14:56.640 06:13:50 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110126 01:14:56.640 killing process with pid 110126 01:14:56.640 Received shutdown signal, test time was about 1.000000 seconds 01:14:56.640 01:14:56.640 Latency(us) 01:14:56.640 [2024-12-09T06:13:51.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:14:56.640 [2024-12-09T06:13:51.226Z] =================================================================================================================== 01:14:56.640 [2024-12-09T06:13:51.226Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:14:56.640 06:13:50 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:14:56.640 06:13:50 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:14:56.640 06:13:50 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110126' 01:14:56.640 06:13:50 keyring_file -- common/autotest_common.sh@973 -- # kill 110126 01:14:56.640 06:13:50 keyring_file -- common/autotest_common.sh@978 -- # wait 110126 01:14:56.640 06:13:51 keyring_file -- keyring/file.sh@21 -- # killprocess 109644 01:14:56.640 06:13:51 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 109644 ']' 01:14:56.640 06:13:51 keyring_file -- common/autotest_common.sh@958 -- # kill -0 109644 01:14:56.640 06:13:51 keyring_file -- common/autotest_common.sh@959 -- # uname 01:14:56.640 06:13:51 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:14:56.640 06:13:51 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109644 01:14:56.640 killing process with pid 109644 01:14:56.640 06:13:51 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:14:56.640 06:13:51 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:14:56.640 06:13:51 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109644' 01:14:56.640 06:13:51 keyring_file -- common/autotest_common.sh@973 -- # kill 109644 01:14:56.640 06:13:51 keyring_file -- common/autotest_common.sh@978 -- # wait 109644 01:14:56.899 01:14:56.899 real 0m15.745s 01:14:56.899 user 0m40.947s 01:14:56.899 sys 0m3.051s 01:14:56.899 06:13:51 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 01:14:56.899 06:13:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:14:56.899 ************************************ 01:14:56.899 END TEST keyring_file 01:14:56.899 
************************************ 01:14:56.899 06:13:51 -- spdk/autotest.sh@293 -- # [[ y == y ]] 01:14:56.899 06:13:51 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 01:14:56.899 06:13:51 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 01:14:56.899 06:13:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 01:14:56.899 06:13:51 -- common/autotest_common.sh@10 -- # set +x 01:14:56.899 ************************************ 01:14:56.899 START TEST keyring_linux 01:14:56.899 ************************************ 01:14:56.899 06:13:51 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 01:14:56.899 Joined session keyring: 119402543 01:14:57.159 * Looking for test storage... 01:14:57.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 01:14:57.160 06:13:51 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 01:14:57.160 06:13:51 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 01:14:57.160 06:13:51 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 01:14:57.160 06:13:51 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@345 -- # : 1 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@365 -- # decimal 1 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@353 -- # local d=1 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@355 -- # echo 1 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@366 -- # decimal 2 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@353 -- # local d=2 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@355 -- # echo 2 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@368 -- # return 0 01:14:57.160 06:13:51 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 01:14:57.160 06:13:51 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 01:14:57.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:57.160 --rc genhtml_branch_coverage=1 01:14:57.160 --rc genhtml_function_coverage=1 01:14:57.160 --rc genhtml_legend=1 01:14:57.160 --rc geninfo_all_blocks=1 01:14:57.160 --rc geninfo_unexecuted_blocks=1 01:14:57.160 01:14:57.160 ' 01:14:57.160 06:13:51 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 01:14:57.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:57.160 --rc genhtml_branch_coverage=1 01:14:57.160 --rc genhtml_function_coverage=1 01:14:57.160 --rc genhtml_legend=1 01:14:57.160 --rc geninfo_all_blocks=1 01:14:57.160 --rc geninfo_unexecuted_blocks=1 01:14:57.160 01:14:57.160 ' 01:14:57.160 06:13:51 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 01:14:57.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:57.160 --rc genhtml_branch_coverage=1 01:14:57.160 --rc genhtml_function_coverage=1 01:14:57.160 --rc genhtml_legend=1 01:14:57.160 --rc geninfo_all_blocks=1 01:14:57.160 --rc geninfo_unexecuted_blocks=1 01:14:57.160 01:14:57.160 ' 01:14:57.160 06:13:51 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 01:14:57.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 01:14:57.160 --rc genhtml_branch_coverage=1 01:14:57.160 --rc genhtml_function_coverage=1 01:14:57.160 --rc genhtml_legend=1 01:14:57.160 --rc geninfo_all_blocks=1 01:14:57.160 --rc geninfo_unexecuted_blocks=1 01:14:57.160 01:14:57.160 ' 01:14:57.160 06:13:51 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 01:14:57.160 06:13:51 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@7 -- # uname -s 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:14:57.160 06:13:51 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4083adec-450d-4b97-8986-2f4423606fc2 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=4083adec-450d-4b97-8986-2f4423606fc2 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:14:57.160 06:13:51 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:14:57.160 06:13:51 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:57.160 06:13:51 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:57.160 06:13:51 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:57.160 06:13:51 keyring_linux -- paths/export.sh@5 -- # export PATH 01:14:57.160 06:13:51 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@51 -- # : 0 
01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 01:14:57.160 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 01:14:57.160 06:13:51 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 01:14:57.160 06:13:51 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 01:14:57.160 06:13:51 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 01:14:57.160 06:13:51 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 01:14:57.160 06:13:51 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 01:14:57.160 06:13:51 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 01:14:57.160 06:13:51 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 01:14:57.160 06:13:51 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 01:14:57.160 06:13:51 keyring_linux -- keyring/common.sh@17 -- # name=key0 01:14:57.160 06:13:51 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:14:57.160 06:13:51 keyring_linux -- keyring/common.sh@17 -- # digest=0 01:14:57.160 06:13:51 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 01:14:57.160 06:13:51 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@732 -- # digest=0 01:14:57.160 06:13:51 keyring_linux -- nvmf/common.sh@733 -- # python - 01:14:57.160 06:13:51 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 01:14:57.160 /tmp/:spdk-test:key0 01:14:57.160 06:13:51 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 01:14:57.160 06:13:51 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 01:14:57.161 06:13:51 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 01:14:57.161 06:13:51 keyring_linux -- keyring/common.sh@17 -- # name=key1 01:14:57.161 06:13:51 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 01:14:57.161 06:13:51 keyring_linux -- keyring/common.sh@17 -- # digest=0 01:14:57.161 06:13:51 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 01:14:57.161 06:13:51 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 01:14:57.161 06:13:51 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 01:14:57.161 06:13:51 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 01:14:57.161 06:13:51 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 01:14:57.161 06:13:51 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 01:14:57.161 06:13:51 keyring_linux -- nvmf/common.sh@732 -- # digest=0 01:14:57.161 06:13:51 keyring_linux -- nvmf/common.sh@733 -- # python - 01:14:57.421 06:13:51 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 01:14:57.421 /tmp/:spdk-test:key1 01:14:57.421 06:13:51 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 01:14:57.421 06:13:51 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=110284 01:14:57.421 06:13:51 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 01:14:57.421 06:13:51 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 110284 01:14:57.421 06:13:51 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 110284 ']' 01:14:57.421 06:13:51 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 01:14:57.421 06:13:51 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 01:14:57.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:14:57.421 06:13:51 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:14:57.421 06:13:51 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 01:14:57.421 06:13:51 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:14:57.421 [2024-12-09 06:13:51.836736] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:14:57.421 [2024-12-09 06:13:51.836862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110284 ] 01:14:57.421 [2024-12-09 06:13:51.981822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:14:57.680 [2024-12-09 06:13:52.013747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 01:14:57.680 06:13:52 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:14:57.680 06:13:52 keyring_linux -- common/autotest_common.sh@868 -- # return 0 01:14:57.680 06:13:52 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 01:14:57.680 06:13:52 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 01:14:57.680 06:13:52 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:14:57.680 [2024-12-09 06:13:52.198226] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:14:57.680 null0 01:14:57.680 [2024-12-09 06:13:52.230167] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:14:57.680 [2024-12-09 06:13:52.230360] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:14:57.680 06:13:52 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 01:14:57.680 06:13:52 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 01:14:57.680 907426336 01:14:57.680 06:13:52 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 01:14:57.680 820527201 01:14:57.680 06:13:52 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=110312 01:14:57.680 06:13:52 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 01:14:57.680 06:13:52 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 110312 /var/tmp/bperf.sock 01:14:57.680 06:13:52 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 110312 ']' 01:14:57.680 06:13:52 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 01:14:57.680 06:13:52 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 01:14:57.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:14:57.680 06:13:52 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:14:57.680 06:13:52 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 01:14:57.680 06:13:52 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:14:57.938 [2024-12-09 06:13:52.318569] Starting SPDK v25.01-pre git sha1 15ce1ba92 / DPDK 24.03.0 initialization... 
01:14:57.938 [2024-12-09 06:13:52.318686] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110312 ] 01:14:57.938 [2024-12-09 06:13:52.470353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 01:14:57.938 [2024-12-09 06:13:52.510340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 01:14:58.872 06:13:53 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 01:14:58.872 06:13:53 keyring_linux -- common/autotest_common.sh@868 -- # return 0 01:14:58.872 06:13:53 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 01:14:58.872 06:13:53 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 01:14:59.129 06:13:53 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 01:14:59.129 06:13:53 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:14:59.387 06:13:53 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 01:14:59.387 06:13:53 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 01:14:59.648 [2024-12-09 06:13:54.111884] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:14:59.648 nvme0n1 01:14:59.648 06:13:54 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 01:14:59.648 06:13:54 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 01:14:59.648 06:13:54 keyring_linux -- keyring/linux.sh@20 -- # local sn 01:14:59.648 06:13:54 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 01:14:59.648 06:13:54 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:14:59.648 06:13:54 keyring_linux -- keyring/linux.sh@22 -- # jq length 01:15:00.212 06:13:54 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 01:15:00.212 06:13:54 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 01:15:00.212 06:13:54 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 01:15:00.212 06:13:54 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 01:15:00.212 06:13:54 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:15:00.212 06:13:54 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:15:00.212 06:13:54 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 01:15:00.212 06:13:54 keyring_linux -- keyring/linux.sh@25 -- # sn=907426336 01:15:00.212 06:13:54 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 01:15:00.213 06:13:54 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 01:15:00.213 06:13:54 keyring_linux -- keyring/linux.sh@26 -- # [[ 907426336 == \9\0\7\4\2\6\3\3\6 ]] 01:15:00.213 06:13:54 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 907426336 01:15:00.213 06:13:54 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 01:15:00.213 06:13:54 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:15:00.470 Running I/O for 1 seconds... 01:15:01.404 11218.00 IOPS, 43.82 MiB/s 01:15:01.404 Latency(us) 01:15:01.404 [2024-12-09T06:13:55.990Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:15:01.404 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 01:15:01.404 nvme0n1 : 1.01 11222.31 43.84 0.00 0.00 11339.96 8519.68 21328.99 01:15:01.404 [2024-12-09T06:13:55.990Z] =================================================================================================================== 01:15:01.404 [2024-12-09T06:13:55.990Z] Total : 11222.31 43.84 0.00 0.00 11339.96 8519.68 21328.99 01:15:01.404 { 01:15:01.404 "results": [ 01:15:01.404 { 01:15:01.404 "job": "nvme0n1", 01:15:01.404 "core_mask": "0x2", 01:15:01.404 "workload": "randread", 01:15:01.404 "status": "finished", 01:15:01.404 "queue_depth": 128, 01:15:01.404 "io_size": 4096, 01:15:01.404 "runtime": 1.011022, 01:15:01.404 "iops": 11222.307724263172, 01:15:01.404 "mibps": 43.837139547903014, 01:15:01.404 "io_failed": 0, 01:15:01.404 "io_timeout": 0, 01:15:01.404 "avg_latency_us": 11339.955486755443, 01:15:01.404 "min_latency_us": 8519.68, 01:15:01.404 "max_latency_us": 21328.98909090909 01:15:01.404 } 01:15:01.404 ], 01:15:01.404 "core_count": 1 01:15:01.404 } 01:15:01.404 06:13:55 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 01:15:01.405 06:13:55 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 01:15:01.663 06:13:56 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 01:15:01.663 06:13:56 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 01:15:01.663 06:13:56 keyring_linux -- keyring/linux.sh@20 -- # local sn 01:15:01.663 06:13:56 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 01:15:01.663 06:13:56 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:15:01.663 06:13:56 keyring_linux -- keyring/linux.sh@22 -- # jq length 01:15:02.268 06:13:56 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 01:15:02.268 06:13:56 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 01:15:02.268 06:13:56 keyring_linux -- keyring/linux.sh@23 -- # return 01:15:02.268 06:13:56 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:15:02.268 06:13:56 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 01:15:02.268 06:13:56 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:15:02.268 06:13:56 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 01:15:02.268 06:13:56 keyring_linux -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:02.268 06:13:56 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 01:15:02.268 06:13:56 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 01:15:02.268 06:13:56 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:15:02.268 06:13:56 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:15:02.268 [2024-12-09 06:13:56.844687] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:15:02.268 [2024-12-09 06:13:56.845441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ec1f0 (107): Transport endpoint is not connected 01:15:02.268 [2024-12-09 06:13:56.846431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ec1f0 (9): Bad file descriptor 01:15:02.268 [2024-12-09 06:13:56.847427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 01:15:02.268 [2024-12-09 06:13:56.847449] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 01:15:02.268 [2024-12-09 06:13:56.847460] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 01:15:02.268 [2024-12-09 06:13:56.847470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
01:15:02.268 2024/12/09 06:13:56 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 01:15:02.268 request: 01:15:02.268 { 01:15:02.268 "method": "bdev_nvme_attach_controller", 01:15:02.268 "params": { 01:15:02.268 "name": "nvme0", 01:15:02.268 "trtype": "tcp", 01:15:02.268 "traddr": "127.0.0.1", 01:15:02.268 "adrfam": "ipv4", 01:15:02.268 "trsvcid": "4420", 01:15:02.268 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:15:02.268 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:15:02.268 "prchk_reftag": false, 01:15:02.268 "prchk_guard": false, 01:15:02.268 "hdgst": false, 01:15:02.268 "ddgst": false, 01:15:02.268 "psk": ":spdk-test:key1", 01:15:02.268 "allow_unrecognized_csi": false 01:15:02.268 } 01:15:02.268 } 01:15:02.268 Got JSON-RPC error response 01:15:02.268 GoRPCClient: error on JSON-RPC call 01:15:02.526 06:13:56 keyring_linux -- common/autotest_common.sh@655 -- # es=1 01:15:02.526 06:13:56 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 01:15:02.527 06:13:56 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 01:15:02.527 06:13:56 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 01:15:02.527 06:13:56 keyring_linux -- keyring/linux.sh@1 -- # cleanup 01:15:02.527 06:13:56 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 01:15:02.527 06:13:56 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 01:15:02.527 06:13:56 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 01:15:02.527 06:13:56 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 01:15:02.527 06:13:56 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 01:15:02.527 06:13:56 keyring_linux -- keyring/linux.sh@33 -- # sn=907426336 01:15:02.527 06:13:56 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 907426336 01:15:02.527 1 links removed 01:15:02.527 06:13:56 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 01:15:02.527 06:13:56 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 01:15:02.527 06:13:56 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 01:15:02.527 06:13:56 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 01:15:02.527 06:13:56 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 01:15:02.527 06:13:56 keyring_linux -- keyring/linux.sh@33 -- # sn=820527201 01:15:02.527 06:13:56 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 820527201 01:15:02.527 1 links removed 01:15:02.527 06:13:56 keyring_linux -- keyring/linux.sh@41 -- # killprocess 110312 01:15:02.527 06:13:56 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 110312 ']' 01:15:02.527 06:13:56 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 110312 01:15:02.527 06:13:56 keyring_linux -- common/autotest_common.sh@959 -- # uname 01:15:02.527 06:13:56 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:02.527 06:13:56 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110312 01:15:02.527 06:13:56 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 01:15:02.527 
killing process with pid 110312 01:15:02.527 Received shutdown signal, test time was about 1.000000 seconds 01:15:02.527 01:15:02.527 Latency(us) 01:15:02.527 [2024-12-09T06:13:57.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:15:02.527 [2024-12-09T06:13:57.113Z] =================================================================================================================== 01:15:02.527 [2024-12-09T06:13:57.113Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:15:02.527 06:13:56 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 01:15:02.527 06:13:56 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110312' 01:15:02.527 06:13:56 keyring_linux -- common/autotest_common.sh@973 -- # kill 110312 01:15:02.527 06:13:56 keyring_linux -- common/autotest_common.sh@978 -- # wait 110312 01:15:02.527 06:13:57 keyring_linux -- keyring/linux.sh@42 -- # killprocess 110284 01:15:02.527 06:13:57 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 110284 ']' 01:15:02.527 06:13:57 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 110284 01:15:02.527 06:13:57 keyring_linux -- common/autotest_common.sh@959 -- # uname 01:15:02.527 06:13:57 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 01:15:02.527 06:13:57 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110284 01:15:02.527 06:13:57 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 01:15:02.527 killing process with pid 110284 01:15:02.527 06:13:57 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 01:15:02.527 06:13:57 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110284' 01:15:02.527 06:13:57 keyring_linux -- common/autotest_common.sh@973 -- # kill 110284 01:15:02.527 06:13:57 keyring_linux -- common/autotest_common.sh@978 -- # wait 110284 01:15:02.785 ************************************ 01:15:02.785 END TEST keyring_linux 01:15:02.785 ************************************ 01:15:02.785 01:15:02.785 real 0m5.895s 01:15:02.785 user 0m12.337s 01:15:02.785 sys 0m1.398s 01:15:02.785 06:13:57 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 01:15:02.785 06:13:57 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:15:03.043 06:13:57 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 01:15:03.043 06:13:57 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 01:15:03.043 06:13:57 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 01:15:03.043 06:13:57 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 01:15:03.043 06:13:57 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 01:15:03.043 06:13:57 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 01:15:03.044 06:13:57 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 01:15:03.044 06:13:57 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 01:15:03.044 06:13:57 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 01:15:03.044 06:13:57 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 01:15:03.044 06:13:57 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 01:15:03.044 06:13:57 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 01:15:03.044 06:13:57 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 01:15:03.044 06:13:57 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 01:15:03.044 06:13:57 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 01:15:03.044 06:13:57 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 01:15:03.044 06:13:57 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 01:15:03.044 06:13:57 -- 
common/autotest_common.sh@726 -- # xtrace_disable 01:15:03.044 06:13:57 -- common/autotest_common.sh@10 -- # set +x 01:15:03.044 06:13:57 -- spdk/autotest.sh@388 -- # autotest_cleanup 01:15:03.044 06:13:57 -- common/autotest_common.sh@1396 -- # local autotest_es=0 01:15:03.044 06:13:57 -- common/autotest_common.sh@1397 -- # xtrace_disable 01:15:03.044 06:13:57 -- common/autotest_common.sh@10 -- # set +x 01:15:04.947 INFO: APP EXITING 01:15:04.947 INFO: killing all VMs 01:15:04.947 INFO: killing vhost app 01:15:04.947 INFO: EXIT DONE 01:15:05.514 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:15:05.514 0000:00:11.0 (1b36 0010): Already using the nvme driver 01:15:05.514 0000:00:10.0 (1b36 0010): Already using the nvme driver 01:15:06.450 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 01:15:06.450 Cleaning 01:15:06.450 Removing: /var/run/dpdk/spdk0/config 01:15:06.450 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 01:15:06.450 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 01:15:06.450 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 01:15:06.450 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 01:15:06.450 Removing: /var/run/dpdk/spdk0/fbarray_memzone 01:15:06.450 Removing: /var/run/dpdk/spdk0/hugepage_info 01:15:06.450 Removing: /var/run/dpdk/spdk1/config 01:15:06.450 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 01:15:06.450 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 01:15:06.450 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 01:15:06.450 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 01:15:06.450 Removing: /var/run/dpdk/spdk1/fbarray_memzone 01:15:06.450 Removing: /var/run/dpdk/spdk1/hugepage_info 01:15:06.450 Removing: /var/run/dpdk/spdk2/config 01:15:06.450 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 01:15:06.450 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 01:15:06.450 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 01:15:06.450 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 01:15:06.450 Removing: /var/run/dpdk/spdk2/fbarray_memzone 01:15:06.450 Removing: /var/run/dpdk/spdk2/hugepage_info 01:15:06.450 Removing: /var/run/dpdk/spdk3/config 01:15:06.450 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 01:15:06.450 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 01:15:06.451 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 01:15:06.451 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 01:15:06.451 Removing: /var/run/dpdk/spdk3/fbarray_memzone 01:15:06.451 Removing: /var/run/dpdk/spdk3/hugepage_info 01:15:06.451 Removing: /var/run/dpdk/spdk4/config 01:15:06.451 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 01:15:06.451 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 01:15:06.451 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 01:15:06.451 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 01:15:06.451 Removing: /var/run/dpdk/spdk4/fbarray_memzone 01:15:06.451 Removing: /var/run/dpdk/spdk4/hugepage_info 01:15:06.451 Removing: /dev/shm/nvmf_trace.0 01:15:06.451 Removing: /dev/shm/spdk_tgt_trace.pid58395 01:15:06.451 Removing: /var/run/dpdk/spdk0 01:15:06.451 Removing: /var/run/dpdk/spdk1 01:15:06.451 Removing: /var/run/dpdk/spdk2 01:15:06.451 Removing: /var/run/dpdk/spdk3 01:15:06.451 Removing: /var/run/dpdk/spdk4 01:15:06.451 Removing: /var/run/dpdk/spdk_pid100196 01:15:06.451 Removing: 
/var/run/dpdk/spdk_pid100236 01:15:06.451 Removing: /var/run/dpdk/spdk_pid100579 01:15:06.451 Removing: /var/run/dpdk/spdk_pid100614 01:15:06.451 Removing: /var/run/dpdk/spdk_pid101008 01:15:06.451 Removing: /var/run/dpdk/spdk_pid101572 01:15:06.451 Removing: /var/run/dpdk/spdk_pid102011 01:15:06.451 Removing: /var/run/dpdk/spdk_pid103007 01:15:06.451 Removing: /var/run/dpdk/spdk_pid104044 01:15:06.451 Removing: /var/run/dpdk/spdk_pid104151 01:15:06.451 Removing: /var/run/dpdk/spdk_pid104218 01:15:06.451 Removing: /var/run/dpdk/spdk_pid105814 01:15:06.451 Removing: /var/run/dpdk/spdk_pid106115 01:15:06.451 Removing: /var/run/dpdk/spdk_pid106451 01:15:06.451 Removing: /var/run/dpdk/spdk_pid107010 01:15:06.451 Removing: /var/run/dpdk/spdk_pid107021 01:15:06.451 Removing: /var/run/dpdk/spdk_pid107415 01:15:06.451 Removing: /var/run/dpdk/spdk_pid107574 01:15:06.451 Removing: /var/run/dpdk/spdk_pid107728 01:15:06.451 Removing: /var/run/dpdk/spdk_pid107825 01:15:06.451 Removing: /var/run/dpdk/spdk_pid107976 01:15:06.451 Removing: /var/run/dpdk/spdk_pid108084 01:15:06.451 Removing: /var/run/dpdk/spdk_pid108787 01:15:06.451 Removing: /var/run/dpdk/spdk_pid108823 01:15:06.451 Removing: /var/run/dpdk/spdk_pid108858 01:15:06.451 Removing: /var/run/dpdk/spdk_pid109108 01:15:06.451 Removing: /var/run/dpdk/spdk_pid109144 01:15:06.451 Removing: /var/run/dpdk/spdk_pid109174 01:15:06.451 Removing: /var/run/dpdk/spdk_pid109644 01:15:06.451 Removing: /var/run/dpdk/spdk_pid109662 01:15:06.451 Removing: /var/run/dpdk/spdk_pid110126 01:15:06.451 Removing: /var/run/dpdk/spdk_pid110284 01:15:06.451 Removing: /var/run/dpdk/spdk_pid110312 01:15:06.451 Removing: /var/run/dpdk/spdk_pid58248 01:15:06.451 Removing: /var/run/dpdk/spdk_pid58395 01:15:06.451 Removing: /var/run/dpdk/spdk_pid58651 01:15:06.451 Removing: /var/run/dpdk/spdk_pid58743 01:15:06.451 Removing: /var/run/dpdk/spdk_pid58777 01:15:06.451 Removing: /var/run/dpdk/spdk_pid58887 01:15:06.451 Removing: /var/run/dpdk/spdk_pid58903 01:15:06.451 Removing: /var/run/dpdk/spdk_pid59037 01:15:06.451 Removing: /var/run/dpdk/spdk_pid59319 01:15:06.451 Removing: /var/run/dpdk/spdk_pid59503 01:15:06.451 Removing: /var/run/dpdk/spdk_pid59593 01:15:06.451 Removing: /var/run/dpdk/spdk_pid59674 01:15:06.451 Removing: /var/run/dpdk/spdk_pid59770 01:15:06.451 Removing: /var/run/dpdk/spdk_pid59810 01:15:06.451 Removing: /var/run/dpdk/spdk_pid59840 01:15:06.451 Removing: /var/run/dpdk/spdk_pid59905 01:15:06.451 Removing: /var/run/dpdk/spdk_pid60009 01:15:06.451 Removing: /var/run/dpdk/spdk_pid60629 01:15:06.451 Removing: /var/run/dpdk/spdk_pid60674 01:15:06.451 Removing: /var/run/dpdk/spdk_pid60743 01:15:06.451 Removing: /var/run/dpdk/spdk_pid60752 01:15:06.451 Removing: /var/run/dpdk/spdk_pid60831 01:15:06.451 Removing: /var/run/dpdk/spdk_pid60840 01:15:06.709 Removing: /var/run/dpdk/spdk_pid60919 01:15:06.709 Removing: /var/run/dpdk/spdk_pid60928 01:15:06.709 Removing: /var/run/dpdk/spdk_pid60980 01:15:06.709 Removing: /var/run/dpdk/spdk_pid60996 01:15:06.709 Removing: /var/run/dpdk/spdk_pid61042 01:15:06.709 Removing: /var/run/dpdk/spdk_pid61072 01:15:06.709 Removing: /var/run/dpdk/spdk_pid61213 01:15:06.709 Removing: /var/run/dpdk/spdk_pid61243 01:15:06.709 Removing: /var/run/dpdk/spdk_pid61326 01:15:06.709 Removing: /var/run/dpdk/spdk_pid61779 01:15:06.709 Removing: /var/run/dpdk/spdk_pid62137 01:15:06.709 Removing: /var/run/dpdk/spdk_pid64639 01:15:06.709 Removing: /var/run/dpdk/spdk_pid64685 01:15:06.709 Removing: /var/run/dpdk/spdk_pid65025 01:15:06.709 Removing: 
/var/run/dpdk/spdk_pid65071 01:15:06.709 Removing: /var/run/dpdk/spdk_pid65479 01:15:06.709 Removing: /var/run/dpdk/spdk_pid66070 01:15:06.709 Removing: /var/run/dpdk/spdk_pid66500 01:15:06.709 Removing: /var/run/dpdk/spdk_pid67503 01:15:06.709 Removing: /var/run/dpdk/spdk_pid68550 01:15:06.709 Removing: /var/run/dpdk/spdk_pid68667 01:15:06.709 Removing: /var/run/dpdk/spdk_pid68735 01:15:06.709 Removing: /var/run/dpdk/spdk_pid70314 01:15:06.709 Removing: /var/run/dpdk/spdk_pid70667 01:15:06.709 Removing: /var/run/dpdk/spdk_pid74487 01:15:06.709 Removing: /var/run/dpdk/spdk_pid74904 01:15:06.709 Removing: /var/run/dpdk/spdk_pid75502 01:15:06.709 Removing: /var/run/dpdk/spdk_pid76007 01:15:06.709 Removing: /var/run/dpdk/spdk_pid81715 01:15:06.709 Removing: /var/run/dpdk/spdk_pid82195 01:15:06.709 Removing: /var/run/dpdk/spdk_pid82304 01:15:06.709 Removing: /var/run/dpdk/spdk_pid82449 01:15:06.709 Removing: /var/run/dpdk/spdk_pid82501 01:15:06.709 Removing: /var/run/dpdk/spdk_pid82554 01:15:06.709 Removing: /var/run/dpdk/spdk_pid82593 01:15:06.709 Removing: /var/run/dpdk/spdk_pid82738 01:15:06.709 Removing: /var/run/dpdk/spdk_pid82879 01:15:06.709 Removing: /var/run/dpdk/spdk_pid83160 01:15:06.709 Removing: /var/run/dpdk/spdk_pid83275 01:15:06.709 Removing: /var/run/dpdk/spdk_pid83516 01:15:06.709 Removing: /var/run/dpdk/spdk_pid83622 01:15:06.709 Removing: /var/run/dpdk/spdk_pid83757 01:15:06.709 Removing: /var/run/dpdk/spdk_pid84129 01:15:06.709 Removing: /var/run/dpdk/spdk_pid84559 01:15:06.709 Removing: /var/run/dpdk/spdk_pid84560 01:15:06.709 Removing: /var/run/dpdk/spdk_pid84561 01:15:06.709 Removing: /var/run/dpdk/spdk_pid84850 01:15:06.709 Removing: /var/run/dpdk/spdk_pid85115 01:15:06.709 Removing: /var/run/dpdk/spdk_pid85517 01:15:06.709 Removing: /var/run/dpdk/spdk_pid85843 01:15:06.709 Removing: /var/run/dpdk/spdk_pid86429 01:15:06.709 Removing: /var/run/dpdk/spdk_pid86431 01:15:06.709 Removing: /var/run/dpdk/spdk_pid86829 01:15:06.709 Removing: /var/run/dpdk/spdk_pid86843 01:15:06.709 Removing: /var/run/dpdk/spdk_pid86863 01:15:06.709 Removing: /var/run/dpdk/spdk_pid86890 01:15:06.709 Removing: /var/run/dpdk/spdk_pid86905 01:15:06.709 Removing: /var/run/dpdk/spdk_pid87295 01:15:06.709 Removing: /var/run/dpdk/spdk_pid87344 01:15:06.709 Removing: /var/run/dpdk/spdk_pid87724 01:15:06.709 Removing: /var/run/dpdk/spdk_pid87963 01:15:06.709 Removing: /var/run/dpdk/spdk_pid88494 01:15:06.709 Removing: /var/run/dpdk/spdk_pid89099 01:15:06.709 Removing: /var/run/dpdk/spdk_pid90532 01:15:06.709 Removing: /var/run/dpdk/spdk_pid91176 01:15:06.709 Removing: /var/run/dpdk/spdk_pid91178 01:15:06.709 Removing: /var/run/dpdk/spdk_pid93236 01:15:06.709 Removing: /var/run/dpdk/spdk_pid93307 01:15:06.709 Removing: /var/run/dpdk/spdk_pid93384 01:15:06.709 Removing: /var/run/dpdk/spdk_pid93456 01:15:06.710 Removing: /var/run/dpdk/spdk_pid93586 01:15:06.710 Removing: /var/run/dpdk/spdk_pid93657 01:15:06.710 Removing: /var/run/dpdk/spdk_pid93734 01:15:06.710 Removing: /var/run/dpdk/spdk_pid93811 01:15:06.710 Removing: /var/run/dpdk/spdk_pid94190 01:15:06.710 Removing: /var/run/dpdk/spdk_pid94931 01:15:06.710 Removing: /var/run/dpdk/spdk_pid96330 01:15:06.710 Removing: /var/run/dpdk/spdk_pid96523 01:15:06.710 Removing: /var/run/dpdk/spdk_pid96800 01:15:06.710 Removing: /var/run/dpdk/spdk_pid97342 01:15:06.710 Removing: /var/run/dpdk/spdk_pid97727 01:15:06.710 Clean 01:15:06.968 06:14:01 -- common/autotest_common.sh@1453 -- # return 0 01:15:06.968 06:14:01 -- spdk/autotest.sh@389 -- # timing_exit 
post_cleanup 01:15:06.968 06:14:01 -- common/autotest_common.sh@732 -- # xtrace_disable 01:15:06.968 06:14:01 -- common/autotest_common.sh@10 -- # set +x 01:15:06.968 06:14:01 -- spdk/autotest.sh@391 -- # timing_exit autotest 01:15:06.968 06:14:01 -- common/autotest_common.sh@732 -- # xtrace_disable 01:15:06.968 06:14:01 -- common/autotest_common.sh@10 -- # set +x 01:15:06.968 06:14:01 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 01:15:06.968 06:14:01 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 01:15:06.968 06:14:01 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 01:15:06.968 06:14:01 -- spdk/autotest.sh@396 -- # [[ y == y ]] 01:15:06.968 06:14:01 -- spdk/autotest.sh@398 -- # hostname 01:15:06.968 06:14:01 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 01:15:07.226 geninfo: WARNING: invalid characters removed from testname! 01:15:33.766 06:14:25 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 01:15:35.141 06:14:29 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 01:15:38.452 06:14:32 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 01:15:40.986 06:14:35 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 01:15:43.518 06:14:37 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 01:15:46.050 06:14:40 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 01:15:48.643 06:14:43 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 01:15:48.643 06:14:43 -- spdk/autorun.sh@1 -- $ timing_finish 01:15:48.643 06:14:43 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 01:15:48.643 06:14:43 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 01:15:48.643 06:14:43 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 01:15:48.643 06:14:43 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 01:15:48.643 + [[ -n 5260 ]] 01:15:48.643 + sudo kill 5260 01:15:48.652 [Pipeline] } 01:15:48.668 [Pipeline] // timeout 01:15:48.673 [Pipeline] } 01:15:48.690 [Pipeline] // stage 01:15:48.695 [Pipeline] } 01:15:48.709 [Pipeline] // catchError 01:15:48.719 [Pipeline] stage 01:15:48.721 [Pipeline] { (Stop VM) 01:15:48.735 [Pipeline] sh 01:15:49.015 + vagrant halt 01:15:52.322 ==> default: Halting domain... 01:15:58.892 [Pipeline] sh 01:15:59.169 + vagrant destroy -f 01:16:02.493 ==> default: Removing domain... 01:16:02.505 [Pipeline] sh 01:16:02.787 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 01:16:02.796 [Pipeline] } 01:16:02.815 [Pipeline] // stage 01:16:02.821 [Pipeline] } 01:16:02.839 [Pipeline] // dir 01:16:02.844 [Pipeline] } 01:16:02.861 [Pipeline] // wrap 01:16:02.868 [Pipeline] } 01:16:02.882 [Pipeline] // catchError 01:16:02.892 [Pipeline] stage 01:16:02.895 [Pipeline] { (Epilogue) 01:16:02.908 [Pipeline] sh 01:16:03.189 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 01:16:09.765 [Pipeline] catchError 01:16:09.767 [Pipeline] { 01:16:09.788 [Pipeline] sh 01:16:10.077 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 01:16:10.338 Artifacts sizes are good 01:16:10.346 [Pipeline] } 01:16:10.356 [Pipeline] // catchError 01:16:10.364 [Pipeline] archiveArtifacts 01:16:10.370 Archiving artifacts 01:16:10.502 [Pipeline] cleanWs 01:16:10.511 [WS-CLEANUP] Deleting project workspace... 01:16:10.511 [WS-CLEANUP] Deferred wipeout is used... 01:16:10.516 [WS-CLEANUP] done 01:16:10.518 [Pipeline] } 01:16:10.528 [Pipeline] // stage 01:16:10.533 [Pipeline] } 01:16:10.543 [Pipeline] // node 01:16:10.547 [Pipeline] End of Pipeline 01:16:10.573 Finished: SUCCESS